<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jamie Gaskins</title>
    <description>The latest articles on DEV Community by Jamie Gaskins (@jgaskins).</description>
    <link>https://dev.to/jgaskins</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F17908%2Fdf4d4ab2-af2a-4eaa-8b31-311af5306876.jpeg</url>
      <title>DEV Community: Jamie Gaskins</title>
      <link>https://dev.to/jgaskins</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jgaskins"/>
    <language>en</language>
    <item>
      <title>Docker Desktop Changes</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Wed, 01 Sep 2021 21:54:37 +0000</pubDate>
      <link>https://dev.to/jgaskins/docker-desktop-changes-18d6</link>
      <guid>https://dev.to/jgaskins/docker-desktop-changes-18d6</guid>
      <description>&lt;p&gt;Docker just updated their terms of use for Docker Desktop that requires a paid subscription for companies that have more than 250 employees or $10M in annual revenue. Some people have some feelings about this that I don’t quite understand. Can someone break it down for me?&lt;/p&gt;

</description>
      <category>docker</category>
      <category>explainlikeimfive</category>
      <category>discuss</category>
      <category>eli5</category>
    </item>
    <item>
      <title>I've Joined Forem as a Principal SRE</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Thu, 04 Mar 2021 15:35:36 +0000</pubDate>
      <link>https://dev.to/jgaskins/i-ve-joined-forem-as-a-principal-sre-8p5</link>
      <guid>https://dev.to/jgaskins/i-ve-joined-forem-as-a-principal-sre-8p5</guid>
      <description>&lt;p&gt;A couple of months ago, &lt;a href="https://dev.to/molly"&gt;Molly&lt;/a&gt; reached out to let me know that an SRE role had opened up at Forem and asked if it was something I was interested in. At the time, I was a technical lead at &lt;a href="https://snapdocs.com" rel="noopener noreferrer"&gt;Snapdocs&lt;/a&gt;, focusing on performance and stability of their service-oriented architecture. I was very much not ready to leave yet because there was still so much more work to do on that platform and so many things were really moving in a lot of the right directions. Why would I want to leave?&lt;/p&gt;

&lt;p&gt;Well, I'd been keeping my eye on DEV as a company for a couple of years. My first interactions with folks on the team really showed their commitment to building a welcoming community, especially for folks who were new to software development. I mean, how awesome would it be to work with a team putting so much effort into building a tech-focused community that wants to build people up and actively works against people who would tear others down?&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1095887827206893568-408" src="https://platform.twitter.com/embed/Tweet.html?id=1095887827206893568"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;And then DEV went open-source, the company rebranded into Forem, and grew from running a single website for a specific community into a platform to foster &lt;em&gt;many&lt;/em&gt; communities. I really wanted to be a part of this. And since I've done &lt;a href="https://github.com/jgaskins" rel="noopener noreferrer"&gt;a lot of open source for free&lt;/a&gt; and always wanted to work on open source for a living, the idea was that much more enticing.&lt;/p&gt;

&lt;p&gt;With all that in mind, I didn't want to pass on an opportunity like this without at least exploring the idea, so I agreed to chat with Molly about it. I asked her approximately 16 million questions and she had great answers for all of them[1]. This made it a really difficult decision (I had two choices, and they were both great for various reasons), but I decided to take the plunge and come work on the Forem platform.&lt;/p&gt;

&lt;p&gt;I'm really excited to see what I'll be able to accomplish here. The community is fantastic, my colleagues are fantastic (they're way cooler than I am, though, so I've gotta step up my game a bit), and there are a lot of fantastic problems to solve.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdigzgf5sincp802jm4g.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdigzgf5sincp802jm4g.gif" alt="Everything's coming up Milhouse!"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;[1] One of my favorite responses from her was actually "we're still figuring that out if you'd like to help guide that process". If you're a hiring manager, that's a great response to a candidate's more difficult questions! It shows a culture of collaboration and selects for people who genuinely enjoy the work that unfinished process involves.&lt;/p&gt;

</description>
      <category>meta</category>
      <category>personalnews</category>
      <category>news</category>
      <category>career</category>
    </item>
    <item>
      <title>Performance Comparison, Rust vs Crystal with Redis</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Fri, 26 Jun 2020 13:45:18 +0000</pubDate>
      <link>https://dev.to/jgaskins/performance-comparison-rust-vs-crystal-with-redis-1a17</link>
      <guid>https://dev.to/jgaskins/performance-comparison-rust-vs-crystal-with-redis-1a17</guid>
      <description>&lt;p&gt;You often hear about how fast languages like Rust and Go are. People port all kinds of things to Rust to make them faster. It's common to hear about a company porting a Ruby microservice to Go or &lt;a href="https://bennetthardwick.com/blog/writing-safe-efficient-parallel-native-node-extensions-in-rust-and-neon/"&gt;writing native extensions for a dynamic language in Rust&lt;/a&gt; for extra performance.&lt;/p&gt;

&lt;p&gt;Crystal also compiles your apps into blazing-fast native code, so today I decided to try comparing Rust and Crystal side-by-side in talking to a Redis database.&lt;/p&gt;

&lt;h2&gt;The Benchmark&lt;/h2&gt;

&lt;p&gt;I wanted something realistic, and most benchmarks I could find were things like Mandelbrot and digits of π. They're CPU-intensive, absolutely, but they're nothing like the workload a typical web app has.&lt;/p&gt;

&lt;p&gt;The benchmark I went with was to connect to a Redis database and run a bunch of pipelined commands. Pipelining means we're sending all of the commands before reading any of them. Because we're not waiting for the result after sending each command, this drastically reduces the impact that latency has on the benchmark. For example, instead of this sequence:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Send command&lt;/li&gt;
&lt;li&gt;Read result&lt;/li&gt;
&lt;li&gt;Send command&lt;/li&gt;
&lt;li&gt;Read result&lt;/li&gt;
&lt;li&gt;Send command&lt;/li&gt;
&lt;li&gt;Read result&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What we do instead is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Send command&lt;/li&gt;
&lt;li&gt;Send command&lt;/li&gt;
&lt;li&gt;Send command&lt;/li&gt;
&lt;li&gt;Read result&lt;/li&gt;
&lt;li&gt;Read result&lt;/li&gt;
&lt;li&gt;Read result&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This way we pay the latency cost once between the last send and the first read instead of 3 times.&lt;/p&gt;
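&lt;p&gt;To see why that matters, here's a toy cost model (a Ruby sketch with made-up numbers, purely for illustration; this is not the benchmark itself): a sequential client pays the round-trip time once per command, while a pipelined client pays it roughly once for the whole batch.&lt;/p&gt;

```ruby
# Toy cost model for pipelining (illustrative numbers, not a benchmark).
# rtt_ms: network round-trip time; per_cmd_ms: server time per command.

def sequential_ms(commands, rtt_ms, per_cmd_ms)
  # Every command waits for its own round trip.
  commands * (rtt_ms + per_cmd_ms)
end

def pipelined_ms(commands, rtt_ms, per_cmd_ms)
  # All commands are sent up front, so the round trip is paid once.
  commands * per_cmd_ms + rtt_ms
end

commands = 100_000
puts sequential_ms(commands, 0.5, 0.005).round  # latency dominates
puts pipelined_ms(commands, 0.5, 0.005).round   # mostly server time
```

&lt;p&gt;Even with a tiny 0.5ms round trip, the sequential version spends the vast majority of its time waiting on the network.&lt;/p&gt;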

&lt;p&gt;For our benchmark, we're going to run a mix of common Redis operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set a key&lt;/li&gt;
&lt;li&gt;Get a key that exists&lt;/li&gt;
&lt;li&gt;Get a key that does not exist&lt;/li&gt;
&lt;li&gt;Increment the value for a key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We do each of these 100k times. The more work we do in this pipeline, the less effect latency has and the more effective the benchmark is. The reason we run a mix of commands isn't so much about what Redis does with them (we're not benchmarking Redis) as what Redis returns for them. The &lt;code&gt;SET&lt;/code&gt; and &lt;code&gt;GET&lt;/code&gt; commands in Redis return strings, which require heap allocations. &lt;code&gt;INCR&lt;/code&gt; returns an integer, which can live on the stack (no &lt;code&gt;malloc&lt;/code&gt; / &lt;code&gt;free&lt;/code&gt; needed), though the client might parse that integer from an intermediate string, which could involve an allocation.&lt;/p&gt;
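&lt;p&gt;As a concrete sketch of that command mix (in Ruby here, purely for illustration), we can count what the pipeline will contain without touching a real Redis server:&lt;/p&gt;

```ruby
# Build the same command mix as the benchmark, but just collect the
# commands we would queue rather than sending them to Redis.
ITERATIONS = 100_000

pipeline = []
pipeline.push("DEL foo")
ITERATIONS.times { pipeline.push("SET foo bar") }
ITERATIONS.times { pipeline.push("GET foo") }
pipeline.push("DEL foo")
ITERATIONS.times { pipeline.push("INCR foo") }
pipeline.push("DEL foo")
ITERATIONS.times { pipeline.push("GET foo") }

puts pipeline.size  # 400_003 commands in a single pipeline
```

&lt;p&gt;Three &lt;code&gt;DEL&lt;/code&gt;s plus 400k benchmark commands, all paid for with roughly one round trip.&lt;/p&gt;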

&lt;p&gt;First we'll look at the code in each language, then the results.&lt;/p&gt;

&lt;h2&gt;Rust&lt;/h2&gt;

&lt;p&gt;We're using &lt;a href="https://github.com/mitsuhiko/redis-rs"&gt;the &lt;code&gt;redis-rs&lt;/code&gt; Rust crate&lt;/a&gt; for this app. We construct a Redis pipeline with &lt;code&gt;redis::pipe()&lt;/code&gt;, fill it with data, and then send that data to the connection.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;time&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;Instant&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;ITERATIONS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;usize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100_000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"redis://127.0.0.1:6379"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;con&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="nf"&gt;.get_connection&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Instant&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;pipe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="nf"&gt;.del&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.ignore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_i&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="n"&gt;ITERATIONS&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="nf"&gt;.set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bar"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.ignore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_i&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="n"&gt;ITERATIONS&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="nf"&gt;.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.ignore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="nf"&gt;.del&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.ignore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_i&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="n"&gt;ITERATIONS&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="nf"&gt;.incr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.ignore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="nf"&gt;.del&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.ignore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_i&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="n"&gt;ITERATIONS&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="nf"&gt;.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.ignore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="nf"&gt;.query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;con&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="nd"&gt;println!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"{}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="nf"&gt;.elapsed&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.as_millis&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Crystal&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s2"&gt;"../src/redis"&lt;/span&gt;

&lt;span class="n"&gt;redis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Redis&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;

&lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monotonic&lt;/span&gt;

&lt;span class="n"&gt;iterations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100_000&lt;/span&gt;
&lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pipeline&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;del&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt;
  &lt;span class="n"&gt;iterations&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"bar"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="n"&gt;iterations&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;del&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt;
  &lt;span class="n"&gt;iterations&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;incr&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;del&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt;
  &lt;span class="n"&gt;iterations&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;pp&lt;/span&gt; &lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monotonic&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that this isn't the &lt;a href="https://github.com/stefanwille/crystal-redis"&gt;more common Crystal Redis shard&lt;/a&gt;. This is a Redis client I wrote that is significantly tuned to reduce heap allocations and remain light while supporting as much of Redis as I needed. &lt;del&gt;I will be publishing it on GitHub soon.&lt;/del&gt; You can find &lt;a href="https://github.com/jgaskins/redis"&gt;the code on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;The Results&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cargo run --release --example redis_app
    Finished release [optimized] target(s) in 0.30s
     Running `target/release/examples/redis_app`
568
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It took our Rust app 568 milliseconds to connect to Redis, send 400k commands, and receive all their results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ crystal run --release bench/bench_redis.cr
00:00:00.328368151
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our Crystal app took just 328 milliseconds to run the same commands. That means the Rust app took &lt;em&gt;73% more time&lt;/em&gt; to perform the exact same work as the Crystal app.&lt;/p&gt;
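&lt;p&gt;A quick check of that figure, using the two wall-clock times reported above:&lt;/p&gt;

```ruby
# How much longer the Rust run took, relative to the Crystal run.
rust_ms    = 568.0
crystal_ms = 328.0

extra_time = (rust_ms / crystal_ms) - 1.0
puts (extra_time * 100).round  # prints 73
```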

&lt;h2&gt;The Caveat&lt;/h2&gt;

&lt;p&gt;The hard part about benchmarking anything that connects to a server is that the server may actually be your bottleneck. With databases especially, it's easy to get stuck waiting on I/O. In our example apps, the Redis server was indeed capping out at 100% CPU but neither app was, which is why we stop at 400k commands — going beyond that wasn't actually providing any useful information.&lt;/p&gt;

&lt;p&gt;So how can we find just the time our app spent in the CPU and ignore all the time we spent waiting on the server? Turns out the UNIX &lt;code&gt;time&lt;/code&gt; command tells us exactly this. Instead of &lt;code&gt;cargo run&lt;/code&gt; and &lt;code&gt;crystal run&lt;/code&gt;, we'll compile our programs and run them directly through &lt;code&gt;time&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cargo build --release --example redis_app
    Finished release [optimized] target(s) in 0.26s
$ time target/release/examples/redis_app
563
target/release/examples/redis_app  0.28s user 0.04s system 48% cpu 0.656 total
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our Rust app used the CPU for 320ms (280ms in userland and 40ms in system calls).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ crystal build --release bench/bench_redis.cr -o bin/bench_redis
$ time bin/bench_redis
00:00:00.327064055
bin/bench_redis  0.12s user 0.02s system 41% cpu 0.341 total
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our Crystal app used the CPU for 140ms (120ms in userland and 20ms in system calls). That means our Crystal app was 2.29x as fast on the CPU!&lt;/p&gt;
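&lt;p&gt;Again, checking the arithmetic against the &lt;code&gt;time&lt;/code&gt; output above:&lt;/p&gt;

```ruby
# CPU time = user + sys, taken from the `time` output above (seconds).
rust_cpu    = 0.28 + 0.04
crystal_cpu = 0.12 + 0.02

puts (rust_cpu / crystal_cpu).round(2)  # prints 2.29
```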

&lt;p&gt;Also, it was interesting to see that both of these programs spent over half of their runtime waiting on Redis! As someone who has worked mostly in Ruby for 16 years, being able to saturate a Redis server with a single client is hilarious to me.&lt;/p&gt;

&lt;h2&gt;The End&lt;/h2&gt;

&lt;p&gt;The purpose of this post was not to say that Rust is slow. Rust is &lt;em&gt;very&lt;/em&gt; fast. The idea was to see whether Rust was really the performance trailblazer we all thought it was, and it turns out Crystal performs just as well, if not better, for cases like this.&lt;/p&gt;

&lt;p&gt;One thing that strikes me is that you never hear people talk about using Rust and Go for how nice they are to read and write the way you hear people talk about Ruby. It's always about the performance. But somehow we don't hear people talking as much about Crystal for the same reasons. I wonder if it's &lt;em&gt;because&lt;/em&gt; it resembles Ruby that people don't take it seriously. Rust and Go have curly braces everywhere, so they're fast, right? 😄&lt;/p&gt;

&lt;p&gt;Anyway, if you use Ruby or Python for their expressiveness and Rust or Go for their performance, it might be worth writing a part of your app in Crystal to get both.&lt;/p&gt;

</description>
      <category>performance</category>
      <category>rust</category>
      <category>crystal</category>
    </item>
    <item>
      <title>Reasons I've Been Rejected For Software Engineering Roles</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Tue, 07 Jan 2020 18:37:52 +0000</pubDate>
      <link>https://dev.to/jgaskins/reasons-i-ve-been-rejected-for-software-engineering-roles-5221</link>
      <guid>https://dev.to/jgaskins/reasons-i-ve-been-rejected-for-software-engineering-roles-5221</guid>
      <description>&lt;p&gt;I've been seeing a lot of discussions of tech interviews in my Twitter feed and a lot of it is around the idea that someone needs to be "more technical" (that's not a thing, by the way — technical expertise isn't linear) or that they need to be able to balance a binary search tree or whatever from memory when the job never requires it.&lt;/p&gt;

&lt;p&gt;The typical interview process at tech companies is almost worse than useless, so I wanted to share some of the comically awful reasons I've been rejected by companies.&lt;/p&gt;




&lt;p&gt;One company rejected me because they didn't think I would be able to jive at their scale. There are two weird things about this. The first is that they asked &lt;em&gt;zero&lt;/em&gt; questions about my experience with scale during the interview.&lt;/p&gt;

&lt;p&gt;The second is that this team only handled referrals — so, really, it was only putting people through their customer-acquisition funnel. Unless this company was somehow gaining tens of thousands of new customers per second (that's the entire population of the US in a matter of hours), it's unlikely that scale was actually a factor. Even if they were going by their company's total size, I've worked at companies several times their size with several times more traffic, data, and engineers.&lt;/p&gt;

&lt;p&gt;Scale is an ego point for some developers. It's some "get on my level" shit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fb2u3lwf29g47nlopxft3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fb2u3lwf29g47nlopxft3.png" alt="gEt oN mY levEL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But it's the kind of vanity metric that actively discourages optimizations and encourages poor scaling practices, so it requires a lot of discussion to understand someone's familiarity with it.&lt;/p&gt;




&lt;p&gt;Another company rejected me because they didn't think I would enjoy working at a large company. I can't even begin to understand how they came to this as a hiring decision. Why would I apply if company size was a dealbreaker? Why are they trying to decide what I'd like? 😂&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Feqole61259x8je1t6xj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Feqole61259x8je1t6xj0.png" alt="Can you not?"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Another company rejected me because I struggled to find a given bug in the &lt;code&gt;sass&lt;/code&gt; Ruby gem during "the debugging portion" of their technical interview — this was an on-site interview that lasted all day.&lt;/p&gt;

&lt;p&gt;Apart from the fact that this Ruby gem contains 13k lines of code (not counting whitespace and comments), it's a parser and working with parsers isn't like working with typical apps. And in Ruby, parsers make heavy use of metaprogramming, making it that much more difficult to find a bug when you're unfamiliar with the code because you can't &lt;code&gt;grep&lt;/code&gt; for what you need.&lt;/p&gt;

&lt;p&gt;I asked the interviewer if the role required implementing parsers in Ruby; he said "probably not". So it was a pointless exercise, but they actually told me that my inability to locate and fix this bug in a 13k-LoC gem in 45 minutes was their reason for rejecting me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fa4zukfq9etxht18zzxuu.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fa4zukfq9etxht18zzxuu.jpeg" alt="Everybody is a genius, but if you judge a fish by its ability to climb a tree, it will live its whole life believing that it's stupid"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;During another interview, the interviewer asked very vague questions. Every time I asked for clarification on even the &lt;em&gt;kind&lt;/em&gt; of response he was looking for, he just dumbed down the question. And he did it in a way that let me know I was losing points for getting "hints".&lt;/p&gt;

&lt;p&gt;He did the same when I stopped asking him to clarify and started guessing at what he wanted. Then later, he asked questions I &lt;em&gt;gave him answers for earlier&lt;/em&gt; when I was trying to guess at what he meant.&lt;/p&gt;

&lt;p&gt;I remember standing in front of that whiteboard thinking "they're 100% gonna reject me because this guy can't ask the questions he wants the answers for". I mean, at least I saw the rejection coming.&lt;/p&gt;

&lt;p&gt;Even so, this one was probably the hardest rejection because the engineering manager was excited about me and even introduced me to her team because she expected a positive outcome.&lt;/p&gt;




&lt;p&gt;At another company, they asked me about my own strengths and weaknesses. This is a pretty pointless question in general but the rejection letter said "we're looking for engineers with a different set of strengths".&lt;/p&gt;

&lt;p&gt;This particular interview was a shit show from start to finish, but this was my favorite bit.&lt;/p&gt;




&lt;p&gt;None of these are small startups run by no-name techbros, either. These are well-respected companies. If you're in the tech industry, you've probably heard of most of them, if not every single one, and you've almost definitely used some of their products and services.&lt;/p&gt;

&lt;p&gt;And yet, no matter how good they are at providing snacks in their "kick-ass office in downtown San Francisco" or &lt;a href="https://youtu.be/VBwWbFpkltg?t=2817" rel="noopener noreferrer"&gt;"unleashing the world's creative energy by designing a more enlightened way of working"&lt;/a&gt;, somehow they're all shit at interviewing.&lt;/p&gt;

&lt;p&gt;Sometimes interviewers will find arbitrary reasons to reject you that have nothing to do with you. Maybe the interviewer doesn't communicate well. Maybe they have a superiority complex and are rejecting you just to exercise that power. Maybe they just had a bad day.&lt;/p&gt;

&lt;p&gt;For these interviews, there may actually be no way to pass. You could probably do everything "right" and the interviewer would still find some excuse to shut you down. And a lot of the time, the team may actually be amazing, but the interviewer is someone who works on a completely unrelated team and would never see you anyway.&lt;/p&gt;

&lt;p&gt;I've been working in this industry for 15 years. Interviewing is still every bit as terrible now as it was when I started. We're all being judged on the wrong criteria. Even when we get the job, it may not be for the right reason.&lt;/p&gt;

</description>
      <category>interviewing</category>
    </item>
    <item>
      <title>Enabling Crystal’s New Multicore Support</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Wed, 25 Sep 2019 04:19:31 +0000</pubDate>
      <link>https://dev.to/jgaskins/enabling-crystal-s-new-multicore-support-4l4g</link>
      <guid>https://dev.to/jgaskins/enabling-crystal-s-new-multicore-support-4l4g</guid>
      <description>&lt;p&gt;Crystal is a statically typed, object-oriented programming language with syntax heavily inspired by Ruby and concepts inspired by quite a few languages, including Ruby, Go, Rust, Swift, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concurrency
&lt;/h2&gt;

&lt;p&gt;One of the nice things about Crystal is its concurrency model. Rather than managing threads, you spin up &lt;a href="https://crystal-lang.org/reference/guides/concurrency.html" rel="noopener noreferrer"&gt;fibers&lt;/a&gt; using the &lt;code&gt;spawn&lt;/code&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="n"&gt;spawn&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"This runs when the program’s main fiber is sleeping or waiting on I/O"&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is powerful. Threads require a lot of hands-on management — if you create them, you have to collect them yourself — but fibers are managed by the garbage collector, so you can fire and forget if you like.&lt;/p&gt;

&lt;p&gt;Fibers are also significantly lighter weight. You can use them to spin off background work so you can do things like send off 50 web requests &lt;em&gt;concurrently&lt;/em&gt; rather than one at a time. You could do this with threads, but &lt;code&gt;pthread_create&lt;/code&gt; and &lt;code&gt;pthread_join&lt;/code&gt; (the &lt;code&gt;libc&lt;/code&gt; functions that usually back threads) are &lt;em&gt;expensive&lt;/em&gt; system calls, so you really shouldn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concurrency != Parallelism
&lt;/h2&gt;

&lt;p&gt;The downside of fibers has historically been that all Crystal fibers execute on the main thread. This let you do a lot of I/O-bound work (HTTP requests, database queries, reading files from disk, etc.), but CPU-bound work (serializing/deserializing JSON, crunching numbers for reports, etc.) was limited to a single CPU core. Ruby, Python, and JavaScript all have this same limitation.&lt;/p&gt;

&lt;p&gt;Yesterday, however, the Crystal team &lt;a href="https://crystal-lang.org/2019/09/23/crystal-0.31.0-released.html" rel="noopener noreferrer"&gt;released version 0.31.0&lt;/a&gt;, which comes with multicore support! This allows us to do not only &lt;em&gt;concurrent&lt;/em&gt; work like we had before, but also true &lt;em&gt;parallel&lt;/em&gt; work — we can split our workload across as many CPU cores as our machine has. For example, here is a single Crystal process saturating a 32-core DigitalOcean droplet:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Faws1.discourse-cdn.com%2Fstandard10%2Fuploads%2Fcrystal_lang%2Foptimized%2F1X%2Fbaec43d0b09fd5c6d630af0e659230ae53a95c9d_2_1380x386.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Faws1.discourse-cdn.com%2Fstandard10%2Fuploads%2Fcrystal_lang%2Foptimized%2F1X%2Fbaec43d0b09fd5c6d630af0e659230ae53a95c9d_2_1380x386.png" alt="32-core server running  raw `htop` endraw  with all cores at 100%"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code for this app was simply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;spawn&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="c1"&gt;# Keep the main thread from exiting&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Enabling Parallelism
&lt;/h2&gt;

&lt;p&gt;Crystal's multicore support is still in preview, so it's off by default. You can enable it with the &lt;code&gt;-Dpreview_mt&lt;/code&gt; compiler flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;crystal build -Dpreview_mt my_app.cr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or you can run it directly without creating build artifacts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;crystal run -Dpreview_mt my_app.cr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The best part, in my opinion, is that the level of parallelism (the number of fibers that can execute in parallel) is controlled by the &lt;code&gt;CRYSTAL_WORKERS&lt;/code&gt; environment variable, which is read during app bootstrapping before your own application code runs. This means you can tune the amount of CPU resources your app will try to use — and you can do it when the app &lt;em&gt;starts&lt;/em&gt; rather than during the build process. So if you're running two different Crystal apps on the same 16-core server, they won't both be trying to use all 16 cores. You can assign 4 cores to one of them and 12 to the other:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CRYSTAL_WORKERS=4 first_app
CRYSTAL_WORKERS=12 second_app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How Does It Work?
&lt;/h2&gt;

&lt;p&gt;Parallelism is configured while your app is bootstrapping — that is, wiring up all the parts it needs before it can begin executing your application code. That parallelism is achieved through a static thread pool: no threads are spun up or down while your app is running, and all of them are created by the time your first line of application code executes!&lt;/p&gt;

&lt;p&gt;Each thread in this pool comes with its own fiber scheduler. It's basically doing what it does in single-thread mode, just across more threads. This means that a fiber currently runs only within the thread it's initially assigned to. This isn't necessarily the thread that created it, though. For example, if Thread 1 calls &lt;code&gt;spawn&lt;/code&gt;, that new fiber may be assigned to Thread 4 and it will live on Thread 4 until it dies.&lt;/p&gt;
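&lt;p&gt;A toy model of that design, sketched in Ruby rather than Crystal (purely an illustration, not Crystal's actual scheduler): each worker thread drains only its own queue, and a job stays with the scheduler it was assigned to:&lt;/p&gt;

```ruby
# Toy model: a static pool where each thread runs its own scheduler loop.
# This only illustrates the idea; Crystal's real scheduler is more involved.
queues  = Array.new(4) { Queue.new }
results = Queue.new

workers = queues.map do |q|
  Thread.new do
    # Each worker only ever drains its own queue -- no work stealing
    while (job = q.pop)
      job.call
    end
  end
end

# "spawn": assign each job to one scheduler; it stays there until it finishes
16.times { |i| queues[i % queues.size].push(-> { results.push(i * i) }) }

collected = Array.new(16) { results.pop }.sort
queues.each { |q| q.push(nil) } # nil tells each worker to exit
workers.each { |w| w.join }
collected # => squares of 0 through 15
```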

&lt;p&gt;The Crystal team have discussed implementing "fiber stealing" (basically, if Thread 1 has nothing to do and Thread 2 has a lot of fibers, Thread 1 might take some of those fibers to spread around the work), but I have a feeling that's a ways off.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Can I Do With This?
&lt;/h2&gt;

&lt;p&gt;Anything that can benefit from concurrent work will automatically be parallelized. For example, web apps often use &lt;code&gt;HTTP::Server&lt;/code&gt; from the Crystal standard library — either directly or through a framework such as &lt;a href="https://amberframework.org" rel="noopener noreferrer"&gt;Amber&lt;/a&gt; or &lt;a href="https://www.luckyframework.org" rel="noopener noreferrer"&gt;Lucky&lt;/a&gt;. This class spins off every request handler in its own fiber. With the &lt;code&gt;preview_mt&lt;/code&gt; flag enabled, this now spreads across CPU cores!&lt;/p&gt;

&lt;p&gt;Background-job processors like Sidekiq (yes, &lt;a href="https://github.com/mperham/sidekiq.cr" rel="noopener noreferrer"&gt;Sidekiq has been ported to Crystal&lt;/a&gt; by its author) perform each job in its own fiber. You can also use the &lt;a href="https://github.com/cloudamqp/amqp-client.cr" rel="noopener noreferrer"&gt;RabbitMQ client&lt;/a&gt; and spawn a fiber for each incoming message.&lt;/p&gt;

&lt;p&gt;Or you can even split up work that you might otherwise do serially. Let's say you have the following code that iterates over an array and processes each element one at a time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code is very simple, but if processing takes a long time, it might be worth splitting the individual parts across all of your CPU cores. To achieve this, we can convert it into a &lt;a href="https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem" rel="noopener noreferrer"&gt;producer/consumer&lt;/a&gt; setup where the producer spins up a fiber in which each result is computed and passed through a &lt;code&gt;Channel&lt;/code&gt; for the consumer to receive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="n"&gt;channel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Channel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;MyValue&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;spawn&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt; &lt;span class="n"&gt;process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;receive&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Queues are often the solution to a producer/consumer problem, and here we're using a &lt;code&gt;Channel&lt;/code&gt; as that queue. Channels are built into the standard library (and also used within Crystal itself), so we can count on them being there without installing additional dependencies.&lt;/p&gt;
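&lt;p&gt;For comparison, the same shape in Ruby uses a thread-safe &lt;code&gt;Queue&lt;/code&gt; where Crystal uses a &lt;code&gt;Channel&lt;/code&gt; (here &lt;code&gt;process&lt;/code&gt; is a stand-in for whatever real work you do):&lt;/p&gt;

```ruby
# Producer/consumer with Ruby's thread-safe Queue standing in for a Channel.
# `process` is a placeholder for the real per-element work.
def process(value)
  value * 2
end

array   = [1, 2, 3, 4, 5]
channel = Queue.new

producers = array.map do |value|
  Thread.new { channel.push(process(value)) }
end

results = Array.new(array.size) { channel.pop }
producers.each { |t| t.join }
results.sort # => [2, 4, 6, 8, 10]
```

&lt;p&gt;In both versions, results arrive in completion order rather than input order, so sort or tag them if ordering matters.&lt;/p&gt;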

&lt;h2&gt;
  
  
  Caveats
&lt;/h2&gt;

&lt;p&gt;Unfortunately, with multithreading enabled there may be some libraries that aren't threadsafe yet. That's okay: fixing these issues is frequently a matter of wrapping mutexes around state changes to make them atomic (like you might with a database transaction), so it's a fantastic opportunity to make a contribution to an open-source library.&lt;/p&gt;
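&lt;p&gt;Sketched in Ruby, whose &lt;code&gt;Mutex&lt;/code&gt; API Crystal closely mirrors, that kind of fix often looks like this (the &lt;code&gt;Counter&lt;/code&gt; class is just an illustration):&lt;/p&gt;

```ruby
# A read-modify-write like `@count += 1` is not atomic; wrapping the whole
# critical section in a mutex keeps concurrent increments from interleaving.
class Counter
  attr_reader :count

  def initialize
    @count = 0
    @lock  = Mutex.new
  end

  def increment
    @lock.synchronize { @count += 1 }
  end
end

counter = Counter.new
threads = 8.times.map { Thread.new { 1_000.times { counter.increment } } }
threads.each { |t| t.join }
counter.count # => 8000
```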

&lt;p&gt;If you're unsure how to make this work for your application, you can always come by the &lt;a href="https://gitter.im/crystal-lang/crystal" rel="noopener noreferrer"&gt;Crystal Gitter channel&lt;/a&gt; or post on the &lt;a href="https://forum.crystal-lang.org/" rel="noopener noreferrer"&gt;Crystal forums&lt;/a&gt; or &lt;a href="https://www.reddit.com/r/crystal_programming/" rel="noopener noreferrer"&gt;subreddit&lt;/a&gt;. The community is very helpful and we're all excited about multicore support coming to the Crystal ecosystem, so feel free to ask any questions you may have!&lt;/p&gt;

</description>
      <category>crystal</category>
      <category>performance</category>
      <category>concurrency</category>
      <category>parallelism</category>
    </item>
    <item>
      <title>Tricksy Little Postgres Query Planner</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Thu, 19 Sep 2019 00:10:24 +0000</pubDate>
      <link>https://dev.to/jgaskins/tricksy-little-postgres-query-planner-42hg</link>
      <guid>https://dev.to/jgaskins/tricksy-little-postgres-query-planner-42hg</guid>
      <description>&lt;p&gt;We've had this query at work that has been irritatingly slow for a while, about 5-10 seconds during the course of a pretty common HTTP request. So far, nobody's been able to speed it up much. We ran &lt;code&gt;EXPLAIN ANALYZE&lt;/code&gt; on the query and it seemed okay.&lt;/p&gt;

&lt;p&gt;If you've never looked at an &lt;code&gt;EXPLAIN ANALYZE&lt;/code&gt; result, they look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                                                                                    QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=2.93..360.99 rows=50 width=1205) (actual time=42.779..42.779 rows=0 loops=1)
   -&amp;gt;  Hash Left Join  (cost=2.93..1879.13 rows=262 width=1205) (actual time=42.778..42.778 rows=0 loops=1)
         Hash Cond: (table1.id = table2.table1_id)
         Filter: (((table2.table3_id = 712581) AND table1.flag) OR ((table1.table4_id = 712581) AND table1.other_flag))
         Rows Removed by Filter: 24211
         -&amp;gt;  Seq Scan on table1  (cost=0.00..1784.11 rows=24211 width=1205) (actual time=0.023..17.043 rows=24211 loops=1)
         -&amp;gt;  Hash  (cost=1.86..1.86 rows=86 width=8) (actual time=0.066..0.066 rows=86 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 12kB
               -&amp;gt;  Seq Scan on table2  (cost=0.00..1.86 rows=86 width=8) (actual time=0.012..0.029 rows=86 loops=1)
 Planning Time: 72.257 ms
 Execution Time: 43.034 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important parts of this are things like &lt;code&gt;Hash Left Join&lt;/code&gt;, &lt;code&gt;Seq Scan&lt;/code&gt;, and &lt;code&gt;Hash&lt;/code&gt;. Another common one you'll see is &lt;code&gt;Index Scan&lt;/code&gt;. These roughly translate to the following time complexities:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;Complexity&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Hash&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://en.wikipedia.org/wiki/Time_complexity#Constant_time"&gt;O(1)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Backed by an in-memory hash table&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Index Scan&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://en.wikipedia.org/wiki/Time_complexity#Logarithmic_time"&gt;O(log n)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Backed by an on-disk binary tree&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Seq Scan&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://en.wikipedia.org/wiki/Time_complexity#Linear_time"&gt;O(n)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Just iterates over rows in the table like you would over elements of any list&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Nested Loop&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://en.wikipedia.org/wiki/Time_complexity#Polynomial_time"&gt;O(n * m)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Iterating over the rows of one set for each row of another&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
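&lt;p&gt;To make the difference concrete, here are toy versions of a nested loop join and a hash join over in-memory Ruby arrays (purely illustrative; the planner's real implementations operate on pages and tuples):&lt;/p&gt;

```ruby
# Toy joins over in-memory rows to show why the planner's choice matters.
orders = [{ id: 1, user_id: 10 }, { id: 2, user_id: 20 }, { id: 3, user_id: 10 }]
users  = [{ id: 10, name: "Alice" }, { id: 20, name: "Bob" }]

# Nested Loop: scan every user for every order, O(n * m)
nested = orders.flat_map do |o|
  users.select { |u| u[:id] == o[:user_id] }
       .map { |u| [o[:id], u[:name]] }
end

# Hash join: build a hash on the join key once, then probe it, O(n + m)
by_id = users.group_by { |u| u[:id] }
hashed = orders.flat_map do |o|
  (by_id[o[:user_id]] || []).map { |u| [o[:id], u[:name]] }
end

nested == hashed # => true; same rows, very different amounts of work
```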

&lt;p&gt;The query plan above is an actual query plan from our DB (tables and columns are scrubbed). It's using a &lt;code&gt;Hash Left Join&lt;/code&gt; and a &lt;code&gt;Hash&lt;/code&gt;, so it should be pretty fast! It's using a &lt;code&gt;Seq Scan&lt;/code&gt; in there, too, but it's happening to a set that's already been filtered — this is actually pretty common.&lt;/p&gt;

&lt;p&gt;So if this query was on such a fast plan, why was it so slow? Today I decided to run the &lt;code&gt;EXPLAIN&lt;/code&gt; against our production database and realized why:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                                                                                   QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.29..907487.63 rows=50 width=1936) (actual time=54.223..8893.723 rows=1 loops=1)
   -&amp;gt;  Nested Loop Left Join  (cost=0.29..1161584.09 rows=64 width=1936) (actual time=54.221..8893.721 rows=1 loops=1)
         Filter: (((table2.table3_id = 712581) AND table1.flag) OR ((table1.table4_id = 712581) AND table1.other_flag))
         Rows Removed by Filter: 2095442
         -&amp;gt;  Seq Scan on table1  (cost=0.00..477337.61 rows=2098223 width=1936) (actual time=0.021..4430.654 rows=2095443 loops=1)
               Filter: (flag OR ((table4_id = 712581) AND other_flag))
               Rows Removed by Filter: 392143
         -&amp;gt;  Index Scan using index_1 on table3  (cost=0.29..0.31 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=2095443)
               Index Cond: (table1.id = table1_id)
 Planning time: 4.166 ms
 Execution time: 8893.867 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's using very different data structures. Not a single &lt;code&gt;Hash&lt;/code&gt; in sight. Our &lt;code&gt;LEFT JOIN&lt;/code&gt;, instead of being backed by a &lt;code&gt;Hash&lt;/code&gt;, is now running as a &lt;code&gt;Nested Loop&lt;/code&gt;. That's &lt;em&gt;rough&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It's most likely not joining with the &lt;code&gt;Hash&lt;/code&gt; method anymore because building an in-memory hash for tables with millions of rows is probably infeasible: not for a single query, but for all queries running concurrently.&lt;/p&gt;

&lt;p&gt;With that in mind, it makes sense why it's using a slower query plan. It just didn't occur to us at the time that it would use a different one.&lt;/p&gt;

&lt;p&gt;This is why, whenever you want to understand the performance of a query, you need to do it on your production database. Your development or staging data set will not give you the insight you need.&lt;/p&gt;

&lt;p&gt;This is not always tenable since some companies may have very strict policies on who can access the production database, but it's important to have tooling around this to allow profiling of queries against real-world data.&lt;/p&gt;

&lt;h3&gt;
  
  
  How we fixed it
&lt;/h3&gt;

&lt;p&gt;You're probably curious about our solution. What we landed on (after a few iterations) involved removing the &lt;code&gt;LEFT JOIN&lt;/code&gt; altogether because we couldn't get it to run as anything but &lt;code&gt;Nested Loop&lt;/code&gt;, which was about 99% of the query runtime. Instead, because of how we were using &lt;code&gt;OR&lt;/code&gt;, we opted to run a &lt;code&gt;UNION&lt;/code&gt; query that looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;table1&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;table1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;table3_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;123456&lt;/span&gt;
&lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;flag&lt;/span&gt;

&lt;span class="k"&gt;UNION&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;table1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;table1&lt;/span&gt;
&lt;span class="k"&gt;INNER&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;table2&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;table1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;table2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;table1_id&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;table2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;table3_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;123456&lt;/span&gt;
&lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;other_flag&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The actual query is slightly more involved because it's generated by an ORM, but it's roughly equivalent to this. Even the query plan is almost identical.&lt;/p&gt;

&lt;p&gt;The first part of the query is why we were doing the &lt;code&gt;LEFT JOIN&lt;/code&gt; to begin with. Since related rows in the other table were only guaranteed if &lt;code&gt;other_flag&lt;/code&gt; was set, we couldn't use &lt;code&gt;INNER JOIN&lt;/code&gt; for the whole thing.&lt;/p&gt;

&lt;p&gt;This brought our query time, which was anywhere from 7-10 seconds depending on database load, down to a pretty consistent 125µs, a latency reduction of about 99.998%. I actually had to check to make sure we were even hitting the right database because that number didn't look &lt;em&gt;anything&lt;/em&gt; like what I expected. But sure enough, this does exactly what we needed it to do, it lets Postgres use 4 indexes (previously it was only using 1), and it probably makes &lt;em&gt;really&lt;/em&gt; nice use of the query cache.&lt;/p&gt;

&lt;p&gt;Not bad for a day's work!&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>performance</category>
      <category>optimization</category>
    </item>
    <item>
      <title>FaaStRuby: Serverless Functions as a Service for Ruby and Crystal</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Tue, 29 Jan 2019 15:01:25 +0000</pubDate>
      <link>https://dev.to/jgaskins/faastruby-serverless-functions-as-a-service-for-ruby-and-crystal-2849</link>
      <guid>https://dev.to/jgaskins/faastruby-serverless-functions-as-a-service-for-ruby-and-crystal-2849</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7cz65RvS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/p4vvxvj04x1jwoanh35q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7cz65RvS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/p4vvxvj04x1jwoanh35q.png" alt="FaaStRuby Logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The "serverless" movement in web development has picked up a lot of steam in recent months. AWS Lambda now supports 6 languages and late last year announced support for Ruby:&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--O2wwXPlt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1024764646207545345/ZulYKrfQ_normal.jpg" alt="Alex Wood profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Alex Wood
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @alexwwood
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      I’m excited to finally announce what I’ve been working on the last few months. Ruby support for AWS Lambda is officially here!
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      17:47 - 29 Nov 2018
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1068199855237910528" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1068199855237910528" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1068199855237910528" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;They also support custom runtimes so you can run anything you like. I tried it out recently and got a Crystal function deployed, but it was difficult to say the least.&lt;/p&gt;

&lt;p&gt;First, I had to run it through the Serverless framework. If you're unfamiliar with it (as I was), note that you have to run it on Linux. It also requires a whole lot of integration with AWS, including CloudFormation, IAM, and all kinds of other services. Because of this, their &lt;a href="https://www.youtube.com/watch?v=KngM5bfpttA"&gt;intro video&lt;/a&gt; instructs you to give it administrator privileges. When you're just starting out and have nothing on your AWS account, this isn't such a big deal, but it's horrible advice for a company account with existing infrastructure. Never give any tool you don't control all the keys to the kingdom.&lt;/p&gt;

&lt;p&gt;In the end, partly because I refused to give it full admin privileges, it took me about 4 hours to deploy a single Crystal function to AWS Lambda — CloudFormation rollbacks take &lt;em&gt;foreeeeeever&lt;/em&gt; when it doesn't succeed due to insufficient privileges and I couldn't find anything to point out exactly what privileges it needs. If you already use Lambda and Serverless or you have an Operations or Infrastructure team that understands these things, I'm sure this would go much more smoothly, but I didn't have that for this particular task. I was starting it from scratch.&lt;/p&gt;

&lt;p&gt;Then I found FaaStRuby, which had &lt;a href="https://faastruby.io/blog/faastruby-0-4-adds-support-for-ruby-2-6-0-and-crystal-0-27-0/"&gt;just announced support for Crystal&lt;/a&gt;. Within 10 minutes, I went from nothing to having a Crystal function available at an API endpoint.&lt;/p&gt;

&lt;p&gt;Getting FaaStRuby up and running is pretty quick:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;gem install faastruby&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;faastruby create-workspace your-workspace-name&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;faastruby new your-function-name --runtime crystal:0.27.0&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;For Ruby functions, you can omit the &lt;code&gt;--runtime&lt;/code&gt; flag since it's the default&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;code&gt;cd your-function-name&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Replace the code in the generated &lt;code&gt;src/handler.cr&lt;/code&gt; with your own code

&lt;ul&gt;
&lt;li&gt;You may want to write some real specs or delete the generated ones. The ones they generate will likely fail and FaaStRuby doesn't deploy unless specs pass — this is a good thing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;code&gt;faastruby deploy-to your-workspace-name&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's all it takes to get going — no registration forms, no payment details to mess with, just install a Ruby gem, run some CLI commands, write a couple lines of code, and hit your function at &lt;a href="https://api.tor1.faastruby.io/your-workspace-name/your-function-name"&gt;https://api.tor1.faastruby.io/your-workspace-name/your-function-name&lt;/a&gt;. It's pretty slick!&lt;/p&gt;

&lt;p&gt;It still seems to be an early-stage service, so it doesn't have all the nice things that AWS Lambda has, but if all you're trying to do is deploy a cloud function quickly in Ruby or Crystal, it'll do what you need without all the ceremony that comes with using AWS.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>crystal</category>
      <category>serverless</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Procs vs Callables in Ruby</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Wed, 05 Sep 2018 03:57:08 +0000</pubDate>
      <link>https://dev.to/jgaskins/procs-vs-callables-in-ruby-1ff1</link>
      <guid>https://dev.to/jgaskins/procs-vs-callables-in-ruby-1ff1</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="http://jgaskins.org/blog/2018/09/04/procs-vs-callables-in-ruby"&gt;jgaskins.org&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I've been playing with RabbitMQ recently, comparing it to our current use of SNS+SQS as our message bus at work. One of the nice things about it is that, with the &lt;code&gt;bunny&lt;/code&gt; gem, you subscribe to messages from a queue by passing a block telling it what to do with each message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;delivery&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;do_things_with&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It started me down a rabbit hole of "how much performance does this need?" so I could figure out whether this should run in its own process. That's when I started looking more closely at how we could maximize performance.&lt;/p&gt;

&lt;p&gt;I wanted to understand the performance of the gem, especially since consumers of message queues should be fast and have minimal overhead, so I opened up the code and found that when that block is called, it's called &lt;a href="https://github.com/ruby-amqp/bunny/blob/2f5b35530c6415b9f0e061586b09755ba0ae1e74/lib/bunny/consumer.rb#L55-L57"&gt;with splat args&lt;/a&gt;, which then calls the block by splatting the same args.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE&lt;/em&gt;: This is not a criticism of the &lt;code&gt;bunny&lt;/code&gt; gem, splat args, or anything. This was simply an exploration of the performance characteristics of the pattern of taking a block and calling that block later, along with a few variations of that pattern. These are all common conventions in Ruby and I think it's useful to understand how well they perform.&lt;/p&gt;

&lt;p&gt;The first thing I wondered was what the performance cost of calling procs was vs calling a &lt;a href="http://blog.jayfields.com/2007/10/ruby-poro.html"&gt;PORO&lt;/a&gt;'s &lt;code&gt;call&lt;/code&gt; method — that is, a &lt;code&gt;call&lt;/code&gt;able object.&lt;/p&gt;
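&lt;p&gt;To make that distinction concrete, here's a quick sketch (my own example, not from the benchmark) showing that a proc and a callable PORO present the same &lt;code&gt;call&lt;/code&gt; interface:&lt;/p&gt;

```ruby
# A proc and a plain Ruby object with a call method ("callable")
# are interchangeable to any code that only invokes call.
adder_proc = proc { |a, b| a + b }

class Adder
  def call(a, b)
    a + b
  end
end

adder_poro = Adder.new

adder_proc.call(1, 2) # => 3
adder_poro.call(1, 2) # => 3
```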

&lt;h2&gt;
  
  
  Assumptions and Hypotheses
&lt;/h2&gt;

&lt;p&gt;I had a feeling that procs would be slower. I didn't have anything on which to base that assumption, but Ruby implementations are very much optimized around the idea of sending messages to objects and procs aren't run-of-the-mill objects — they're basically a Ruby binding to some bytecode. I don't know how heavy those bindings are, but given that you can get all kinds of introspection out of them (including local variables), I assumed they'd be pretty heavy. So I'm assuming a lot here.&lt;/p&gt;

&lt;p&gt;Something that was less of an assumption but more of a hypothesis was that splat-args would be slower than explicit arguments. Splat args have to allocate and populate an array, so there's a performance cost to them. Still, I wasn't completely certain of it, so it was at best a hypothesis.&lt;/p&gt;
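&lt;p&gt;The allocation difference is easy to see in a sketch (illustrative methods of my own, not the benchmark code):&lt;/p&gt;

```ruby
# Explicit arguments: values are passed directly, no intermediate
# Array is allocated.
def explicit_args(a, b, c)
  a + b + c
end

# Splat arguments: every call gathers the arguments into a freshly
# allocated Array, which is the cost being hypothesized about.
def splat_args(*args)
  args.sum
end

explicit_args(1, 2, 3) # => 6
splat_args(1, 2, 3)    # => 6
```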

&lt;p&gt;Speculation about performance without benchmarks is a waste of time, so I &lt;a href="https://gist.github.com/jgaskins/06d50b7723090c60e64b5af436fb6436"&gt;wrote some&lt;/a&gt;, including calling both with splat args. Turns out my guesses were pretty close (click the link to see the benchmark code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Comparison:
     callable no arg: 10095848.2 i/s
   callable with arg:  9777103.9 i/s - same-ish: difference falls within error
     callable 3 args:  9460308.0 i/s - same-ish: difference falls within error
callable splat args (0):  6773190.5 i/s - 1.49x  slower
         proc no arg:  6747397.4 i/s - 1.50x  slower
       proc with arg:  6663572.5 i/s - 1.52x  slower
         proc 3 args:  6454715.5 i/s - 1.56x  slower
callable splat args (1):  5099903.4 i/s - 1.98x  slower
 proc splat args (0):  5028088.6 i/s - 2.01x  slower
callable splat args (3):  4880320.0 i/s - 2.07x  slower
 proc splat args (1):  4091623.1 i/s - 2.47x  slower
 proc splat args (3):  4005997.8 i/s - 2.52x  slower
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was disappointing for 2 reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Proving yourself correct teaches you very little; proving yourself wrong teaches you a &lt;em&gt;lot&lt;/em&gt;. At best, I proved a bunch of mildly educated assumptions correct.&lt;/li&gt;
&lt;li&gt;Capturing and later calling blocks is such a common practice in Ruby that I wonder how much performance we're losing as a result.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On the bright side, I'd gone down enough rabbit holes to find this out. If I'd been wrong, I'd have gone down even more to understand why.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Do?
&lt;/h2&gt;

&lt;p&gt;It would be silly to say "never capture blocks because performance". Capturing blocks in Ruby might be a bit slower, but it's a powerfully expressive concept and it's unlikely that the difference in performance will make that much of an impact in your app — I was still getting 6.7 &lt;em&gt;million&lt;/em&gt; calls per second with a proc. If you need to call a captured block on the order of millions of times per second, you'll probably benefit from this article. Otherwise, this is largely an academic exercise and that's okay, too.&lt;/p&gt;

&lt;p&gt;If you want to optimize performance while still allowing block capture, you can do both by taking a callable &lt;em&gt;or&lt;/em&gt; a block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ThingThatHasEvents&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;on&lt;/span&gt; &lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kp"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;block&lt;/span&gt;
    &lt;span class="vi"&gt;@events&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="n"&gt;block&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll want to have a check in there to ensure you receive one or the other, but making affordances for passing either one will give you the expressive API of receiving a block while still accepting the faster path of callable objects. With a typical "event handler" style where the event is emitted with the call to each handler, we can see this goes &lt;a href="https://gist.github.com/jgaskins/e493f7f87f05e9c96cfcb289d0c0fa12"&gt;up to 45% faster&lt;/a&gt;.&lt;/p&gt;
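&lt;p&gt;As a rough sketch of that "event handler" style (illustrative names, not the benchmark code; handlers here are passed as plain arguments rather than captured blocks):&lt;/p&gt;

```ruby
class EventEmitter
  def initialize
    @events = Hash.new { |hash, key| hash[key] = [] }
  end

  # Register a handler: anything that responds to call works,
  # whether it's a proc or a callable PORO.
  def on(event_name, handler)
    @events[event_name].push(handler)
  end

  # Emitting an event means calling every registered handler in turn.
  def emit(event_name, payload)
    @events[event_name].each { |handler| handler.call(payload) }
  end
end

emitter = EventEmitter.new
received = []
emitter.on(:message, proc { |msg| received.push(msg) })
emitter.emit(:message, "hello")
received # => ["hello"]
```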

&lt;p&gt;Unfortunately, the benchmark shows that a heterogeneous set of event handlers (some passed as blocks, some passed as callable POROs) is actually slower than procs-only, but only by about 10% — much less than the difference between procs and callables separately.&lt;/p&gt;

&lt;h2&gt;
  
  
  Always Benchmark
&lt;/h2&gt;

&lt;p&gt;I may have been right about this, but performance claims without benchmarks are always bullshit. Always benchmark.&lt;/p&gt;

&lt;p&gt;Even if you've done something similar before.&lt;br&gt;
Even if you've done the exact same thing before in a different app.&lt;br&gt;
Even if you've done the exact same thing before in the same app on a different Ruby VM.&lt;/p&gt;
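&lt;p&gt;If you want to reproduce something like this yourself, a minimal skeleton with Ruby's standard &lt;code&gt;benchmark&lt;/code&gt; library looks like the sketch below. The results above came from a gem that reports iterations per second; this stdlib version just times a fixed number of calls, but it's enough to compare the two paths:&lt;/p&gt;

```ruby
require 'benchmark'

# An anonymous callable PORO and an equivalent proc.
callable = Class.new {
  def call(x)
    x
  end
}.new

a_proc = proc { |x| x }

n = 500_000
Benchmark.bm(8) do |bm|
  bm.report('callable') { n.times { callable.call(1) } }
  bm.report('proc')     { n.times { a_proc.call(1) } }
end
```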

&lt;p&gt;I'll likely put in a PR to the &lt;code&gt;bunny&lt;/code&gt; gem to see if we can remove the splat-args and allow subscribing with a non-&lt;code&gt;Proc&lt;/code&gt; PORO. In the meantime, the current implementation provides enough performance for our needs.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>proc</category>
      <category>callable</category>
      <category>poro</category>
    </item>
    <item>
      <title>Fast Type-Checked Serializers for Ruby Web APIs</title>
      <dc:creator>Jamie Gaskins</dc:creator>
      <pubDate>Sun, 11 Feb 2018 07:52:15 +0000</pubDate>
      <link>https://dev.to/jgaskins/fast-type-checked-serializers-for-ruby-web-apis-2o19</link>
      <guid>https://dev.to/jgaskins/fast-type-checked-serializers-for-ruby-web-apis-2o19</guid>
      <description>&lt;p&gt;When you need two different processes to be able to talk to each other, they need to be able to connect and speak the same language. If one of them is a web app, you're using HTTP (and everything it's built on top of), so that covers most of it. If you're sending requests and receiving responses serialized as JSON, you're even closer. But we still need to figure out that last mile — what JSON objects are we going to send and receive?&lt;/p&gt;

&lt;p&gt;If our web server is running on Ruby, we just need to get our models converted into hashes and collections converted into arrays. Conversion to JSON is a simple &lt;code&gt;to_json&lt;/code&gt; call at that point. How we convert them into those hashes and arrays is the choice to be made.&lt;/p&gt;
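&lt;p&gt;For example, once a model has been reduced to a hash of primitives, the &lt;code&gt;json&lt;/code&gt; standard library takes care of that last step:&lt;/p&gt;

```ruby
require 'json'

# A model reduced to a hash of primitive values...
order = { id: 'abc123', total_price: 4999 }

# ...serializes with a plain to_json call.
order.to_json # => '{"id":"abc123","total_price":4999}'
```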

&lt;h2&gt;
  
  
  Conversion Options
&lt;/h2&gt;

&lt;p&gt;There are plenty of gems available to convert arbitrary Ruby objects into our JSON-serializable format. The most well-known of these is &lt;a href="https://github.com/rails-api/active_model_serializers"&gt;&lt;code&gt;active_model_serializers&lt;/code&gt;&lt;/a&gt;, but a couple weeks ago Netflix announced &lt;a href="https://medium.com/netflix-techblog/fast-json-api-serialization-with-ruby-on-rails-7c06578ad17f"&gt;their own&lt;/a&gt; serialization gem called &lt;code&gt;fast_jsonapi&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With both gems, you declare your serializers like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OrderSerializer&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveModel&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Serializer&lt;/span&gt;
  &lt;span class="n"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="ss"&gt;:id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;:customer_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;:customer_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;:line_item_ids&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;:delivery_address&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;:delivery_instructions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;:total_price&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Downside of Those Options
&lt;/h2&gt;

&lt;p&gt;Both of those gems are great. You can tell by looking at this serializer exactly what fields it'll be sending. The thing they're missing is validation of the types of the data they are emitting. Is &lt;code&gt;delivery_address&lt;/code&gt; a string or is it broken out into a hash with each address part? Which of these fields can be &lt;code&gt;nil&lt;/code&gt;? Is &lt;code&gt;total_price&lt;/code&gt; sent as a string to be used presentationally, a floating-point number for calculation, or an integer number of &lt;code&gt;cents&lt;/code&gt; to avoid floating-point error? These are impossible to tell by looking at the attribute declaration in the serializer.&lt;/p&gt;

&lt;p&gt;You might be thinking "why do I want type checking in a dynamic language?!" While I can see the benefits of static typing, I really do enjoy the freedom that dynamic typing gives us to build applications. However, when you have two different applications that must both agree on how they speak to each other, you need to be 100% sure about all of the details, just like getting contract details ironed out in a consulting agreement. Adding unit tests to verify that a serializer emits the right data types is a lot of work. What if the serializers did that for us and would even let us know if our data was incorrect in production?&lt;/p&gt;

&lt;p&gt;Well, I wouldn't be writing this if I didn't already have a solution for that. Enter &lt;a href="https://github.com/jgaskins/primalize"&gt;&lt;code&gt;primalize&lt;/code&gt;&lt;/a&gt;, a serialization gem that comes with type checking out of the box. Primalize is so named because it converts more advanced objects into their primitive counterparts, but it does it in an intelligent way.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;Let's start by declaring a serializer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OrderSerializer&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Primalize&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Single&lt;/span&gt;
  &lt;span class="n"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="ss"&gt;id: &lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# UUIDs for primary keys are great&lt;/span&gt;
    &lt;span class="ss"&gt;customer_name: &lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;customer_id: &lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;line_item_ids: &lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="ss"&gt;delivery_address: &lt;/span&gt;&lt;span class="n"&gt;primalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;AddressSerializer&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="ss"&gt;delivery_instructions: &lt;/span&gt;&lt;span class="n"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="ss"&gt;total_price: &lt;/span&gt;&lt;span class="n"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the attributes we declare all have types. We say that &lt;code&gt;id&lt;/code&gt; is a &lt;code&gt;string&lt;/code&gt; and that &lt;code&gt;line_item_ids&lt;/code&gt; is an &lt;code&gt;array&lt;/code&gt; containing &lt;code&gt;string&lt;/code&gt; values. The &lt;code&gt;total_price&lt;/code&gt; is an &lt;code&gt;integer&lt;/code&gt;. These are pretty easy to understand and they tell us exactly what we're getting.&lt;/p&gt;

&lt;p&gt;Notice the &lt;code&gt;delivery_instructions&lt;/code&gt; attribute is marked as &lt;code&gt;optional&lt;/code&gt;. This means it can be &lt;code&gt;nil&lt;/code&gt;. If an attribute isn't marked as &lt;code&gt;optional&lt;/code&gt;, it can't be &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's a list of the various types that &lt;code&gt;Primalize&lt;/code&gt; supports for model serializers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;integer&lt;/code&gt;: whole numbers&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;float&lt;/code&gt;: floating-point numbers&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;number&lt;/code&gt;: any numeric value&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;string&lt;/code&gt;: text&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;boolean&lt;/code&gt;: explicitly &lt;code&gt;true&lt;/code&gt; or &lt;code&gt;false&lt;/code&gt; (not "truthy" or "falsy" values)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;array(*types)&lt;/code&gt;: an array containing values of the specified types

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;array(string, integer)&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;optional(*types)&lt;/code&gt;: any of the specified types or &lt;code&gt;nil&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;optional(string)&lt;/code&gt;, both &lt;code&gt;"foo"&lt;/code&gt; and &lt;code&gt;nil&lt;/code&gt; are acceptable values&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;enum(*values)&lt;/code&gt;: must be one of the specified values

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;enum('requested', 'shipped', 'delivered')&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;timestamp&lt;/code&gt;: a &lt;code&gt;Date&lt;/code&gt;, &lt;code&gt;Time&lt;/code&gt;, or &lt;code&gt;DateTime&lt;/code&gt; value&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;any(*types)&lt;/code&gt;: any value of the given types

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;any(string, integer)&lt;/code&gt; will only match on strings and integers&lt;/li&gt;
&lt;li&gt;If no types are specified, any value will match&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;primalize(YourPrimalizerClass)&lt;/code&gt;: primalizes the specified attribute with the given &lt;code&gt;Primalize::Single&lt;/code&gt; subclass

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;primalize(OrderSerializer)&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;object(**types)&lt;/code&gt;: a hash of the specified structure

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;object(id: integer, name: string)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Only the required keys need to be specified. The rest of the hash will pass.&lt;/li&gt;
&lt;li&gt;If no keys are specified, all of them are optional and it will match any hash.&lt;/li&gt;
&lt;li&gt;Ruby objects already define a method called &lt;code&gt;hash&lt;/code&gt; that's used for resolving hash keys and determining &lt;code&gt;Set&lt;/code&gt; inclusion, so we had to use the more language-agnostic name &lt;code&gt;object&lt;/code&gt;. If we'd used &lt;code&gt;hash&lt;/code&gt;, it would be effectively impossible to use a serializer class as a hash key or store it in a &lt;code&gt;Set&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These type declarations are composable, so we can set up some really complex type declarations if our API calls for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="ss"&gt;user: &lt;/span&gt;&lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="n"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="ss"&gt;email: &lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;nicknames: &lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="ss"&gt;role: &lt;/span&gt;&lt;span class="n"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;enum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'user'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'agent'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'manager'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'admin'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
  &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you find yourself writing a lot of nested objects, though, it might be worth extracting that to another serializer and using &lt;code&gt;primalize(ThatSerializer)&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What happens if the type check fails?
&lt;/h2&gt;

&lt;p&gt;In Ruby, we can't enforce that our models don't have the wrong types of attributes because any variable can hold any value, but we can run a type check at the time of serialization.&lt;/p&gt;
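&lt;p&gt;Conceptually, the check is simple. Here's a minimal sketch of the idea (my own illustration, not Primalize's actual implementation):&lt;/p&gt;

```ruby
# Illustrative only: a hand-rolled serialization-time type check.
# Primalize's real checks are richer, but the idea is the same.
def check_type!(attribute, value, expected_class)
  unless value.is_a?(expected_class)
    raise TypeError,
      "#{attribute} expected a #{expected_class}, got #{value.inspect}"
  end
  value
end

check_type!(:total_price, 4999, Integer) # => 4999

begin
  check_type!(:total_price, '49.99', Integer)
rescue TypeError => error
  error.message # names the attribute, the expected type, and the actual value
end
```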

&lt;p&gt;Serializers have a default type-mismatch handler, which is a &lt;code&gt;call&lt;/code&gt;able object (as in, responds to &lt;code&gt;call&lt;/code&gt;) that receives &lt;code&gt;serializer_class, attribute, type, value&lt;/code&gt;. By default, it raises an exception, which is great for a development environment. In production, though, it might be preferable to let it pass through while still sending alerts.&lt;/p&gt;

&lt;p&gt;You can customize what you do when a type mismatch occurs on your individual serializer class by setting &lt;code&gt;MySerializerClass.type_mismatch_handler&lt;/code&gt; to your preferred method of handling it. To set it for &lt;em&gt;all&lt;/em&gt; model serializers, use &lt;code&gt;Primalize::Single.type_mismatch_handler&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;MySerializerClass&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;type_mismatch_handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;proc&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;serializer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kp"&gt;attr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;serializer&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;#&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="kp"&gt;attr&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; is specified as &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inspect&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, but is &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inspect&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

  &lt;span class="no"&gt;Slack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;notify&lt;/span&gt; &lt;span class="s1"&gt;'#bugs'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;
  &lt;span class="no"&gt;BugTracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;notify&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;

  &lt;span class="c1"&gt;# the return value of the block is the value to be used for that attribute&lt;/span&gt;
  &lt;span class="n"&gt;value&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Attribute conversion
&lt;/h2&gt;

&lt;p&gt;If an attribute isn't already the type you're expecting, you can provide a block to its type declaration to specify how to coerce it to that type. For example, if an &lt;code&gt;Address#city&lt;/code&gt; returns a &lt;code&gt;City&lt;/code&gt; object instead of just the name, we could serialize it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AddressSerializer&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Primalize&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Single&lt;/span&gt;
  &lt;span class="n"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="ss"&gt;city: &lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;name&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicates that we would call &lt;code&gt;name&lt;/code&gt; on the city that's in the address's &lt;code&gt;city&lt;/code&gt; attribute. The type check will still occur here. If the result of that block isn't a string, we'll trigger a type mismatch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual attributes
&lt;/h2&gt;

&lt;p&gt;Sometimes you want to provide attributes that don't actually exist on the model being serialized. Just like how our server-side domain models don't need to match the database schema, a client doesn't need to know that the server-side model doesn't have a particular attribute. For those "virtual" attributes, you can define a method that will compute the attribute from what the model &lt;em&gt;does&lt;/em&gt; have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AddressSerializer&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Primalize&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Single&lt;/span&gt;
  &lt;span class="n"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="ss"&gt;latitude: &lt;/span&gt;&lt;span class="n"&gt;float&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;longitude: &lt;/span&gt;&lt;span class="n"&gt;float&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;latitude&lt;/span&gt;
    &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;coordinates&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;latitude&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;longitude&lt;/span&gt;
    &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;coordinates&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;longitude&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Composite Serializers
&lt;/h2&gt;

&lt;p&gt;That's just individual model serializers. There's also first-class support for returning associated objects in a single response without nesting them with &lt;code&gt;primalize(AnotherSerializer)&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OrderResponse&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Primalize&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Many&lt;/span&gt;
  &lt;span class="n"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="ss"&gt;order: &lt;/span&gt;&lt;span class="no"&gt;OrderSerializer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;line_items: &lt;/span&gt;&lt;span class="n"&gt;enumerable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;LineItemSerializer&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="ss"&gt;address: &lt;/span&gt;&lt;span class="no"&gt;AddressSerializer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;customer: &lt;/span&gt;&lt;span class="no"&gt;CustomerSerializer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="c1"&gt;# Only required for corporate accounts&lt;/span&gt;
    &lt;span class="ss"&gt;purchase_order: &lt;/span&gt;&lt;span class="n"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;PurchaseOrderSerializer&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I typically refer to these as "response serializers" and, while it's possible to serialize just the domain model in an API response, I almost always wrap them inside a serializer like this in case I need to return associated models in the future. If you have any API consumers you don't control, once you start returning the model as the top-level object, you're stuck with it most of the time.&lt;/p&gt;

&lt;p&gt;To use this serializer, you instantiate it with the keys you gave it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;OrderResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="ss"&gt;order: &lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;line_items: &lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;line_items&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;address: &lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;delivery_address&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;customer: &lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;customer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;purchase_order: &lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;purchase_order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Primalize::Many&lt;/code&gt; doesn't traverse the object graph for you, and while this might feel inconvenient, it's intentional. It ensures that you can customize what you're sending. For example, if your &lt;code&gt;order&lt;/code&gt; is an ActiveRecord model, you may not want to send &lt;em&gt;all&lt;/em&gt; of its &lt;code&gt;line_items&lt;/code&gt; together. You might split them between &lt;code&gt;taxable_line_items&lt;/code&gt; and &lt;code&gt;non_taxable_line_items&lt;/code&gt; for some business reason. If the serializer traversed the association for you, you might not be able to specify that without adding a method on the &lt;code&gt;Order&lt;/code&gt; model. With Primalize, though, you can set up your serializer like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OrderResponse&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Primalize&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Many&lt;/span&gt;
  &lt;span class="n"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="ss"&gt;taxable_line_items: &lt;/span&gt;&lt;span class="n"&gt;enumerable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;LineItemSerializer&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="ss"&gt;non_taxable_line_items: &lt;/span&gt;&lt;span class="n"&gt;enumerable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;LineItemSerializer&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="no"&gt;OrderResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="ss"&gt;taxable_line_items: &lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;line_items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;taxable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;non_taxable_line_items: &lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;line_items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;non_taxable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The type checking comes into play with response serializers as well. For example, if we passed a single line item where we specified &lt;code&gt;enumerable(LineItemSerializer)&lt;/code&gt;, we would get an error. If we passed &lt;code&gt;nil&lt;/code&gt; for a field not declared &lt;code&gt;optional&lt;/code&gt;, we would also get an error.&lt;/p&gt;
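&lt;p&gt;To illustrate the idea (this is a sketch of the concept with a hypothetical &lt;code&gt;check_types&lt;/code&gt; helper, not the gem's internals), the check boils down to validating each attribute against its declared type before serializing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;# Sketch only: validate attributes against declared types, the way Primalize
# does conceptually. The check_types helper and the types hash are hypothetical.
def check_types(attrs, types)
  types.each do |name, spec|
    value = attrs[name]
    next if spec[:optional] and value.nil?
    unless spec[:check].call(value)
      raise TypeError, "#{name}: expected #{spec[:desc]}, got #{value.inspect}"
    end
  end
  attrs
end

types = {
  line_items: { check: -&amp;gt;(v) { v.is_a?(Enumerable) }, desc: "an enumerable" },
  notes:      { check: -&amp;gt;(v) { v.is_a?(String) }, desc: "a string", optional: true },
}

check_types({ line_items: [], notes: nil }, types) # passes: notes is optional

begin
  check_types({ line_items: 42 }, types) # a single item, not a collection
rescue TypeError
  puts $!.message # line_items: expected an enumerable, got 42
end
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;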

&lt;p&gt;All of this makes for more robust API endpoints.&lt;/p&gt;

&lt;h2&gt;
  JSONAPI
&lt;/h2&gt;

&lt;p&gt;Some client applications might be written in a way that is more suited to the JSONAPI response structure. For such applications, there is a &lt;a href="https://github.com/jgaskins/primalize-jsonapi"&gt;JSONAPI wrapper&lt;/a&gt; for Primalize.&lt;/p&gt;

&lt;h2&gt;
  Performance
&lt;/h2&gt;

&lt;p&gt;I mentioned in the title that these serializers are fast, but I haven't touched on that yet. When I &lt;a href="https://github.com/Netflix/fast_jsonapi/pull/42#issuecomment-364693269"&gt;benchmarked Primalize::JSONAPI against AMS and Netflix's &lt;code&gt;fast_jsonapi&lt;/code&gt; gem&lt;/a&gt;, it was over 12× as fast as AMS and about half as fast as the Netflix gem. Given that the entire selling point of &lt;code&gt;fast_jsonapi&lt;/code&gt; is speed, and that the gap between it and Primalize when serializing over 1000 models was 12ms (less than 12µs per model), that's close enough. On typical payloads (dozens of models, maybe 100), garbage-collection pauses will dwarf the difference between them.&lt;/p&gt;

&lt;p&gt;Also, that benchmark was using the JSONAPI wrapper, which is doing considerably more work (including its own naive traversal of associated models). If you don't require that particular format, you can stick with &lt;code&gt;Primalize::Many&lt;/code&gt; to cut off about 25% of that time.&lt;/p&gt;

&lt;p&gt;I don't know about you, but I'm certainly willing to trade 12ms (much less on typical payloads) for peace of mind that my response payloads match what the client expects.&lt;/p&gt;
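&lt;p&gt;If you want to run this kind of comparison against your own payloads, Ruby's stdlib &lt;code&gt;Benchmark&lt;/code&gt; module is enough for a rough harness. The two reports below are stand-ins, not the gems themselves; swap in calls to whichever serializers you're comparing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;require "benchmark"
require "json"

# 1,000 fake records, roughly the payload size from the benchmark above.
records = Array.new(1000) { |i| { id: i, name: "Item #{i}", price: i * 100 } }

Benchmark.bmbm do |x|
  # Stand-ins: replace these blocks with your actual serializer calls.
  x.report("JSON.generate") { records.map { |r| JSON.generate(r) } }
  x.report("Hash#to_json")  { records.map { |r| r.to_json } }
end
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;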

&lt;h2&gt;
  Future Development
&lt;/h2&gt;

&lt;p&gt;Primalize is a gem based on a pattern I implemented at a previous employer. It has been used in production there for around two years, significantly stabilizing communication between their API and its consumers while also improving performance. With that said, there are still ways to improve.&lt;/p&gt;

&lt;p&gt;One feature a coworker at that employer requested was the ability to generate RAML or Swagger documentation. For example, if an API endpoint returns an &lt;code&gt;OrderResponse&lt;/code&gt;, you should be able to generate the exact structure a client can expect from that endpoint. RAML docs can be &lt;a href="http://blog.getpostman.com/2015/11/04/supporting-raml-folders-in-postman/"&gt;imported into Postman&lt;/a&gt; for easier testing and consumption of REST APIs.&lt;/p&gt;
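&lt;p&gt;As a sketch of how that could work (hypothetical names throughout; this is not an existing Primalize API), the serializer's declared types already contain everything needed to emit a JSON-Schema-style description:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;require "json"

# Hypothetical: a flat map of attribute names to declared types, as a
# serializer's attributes call might record them internally.
DECLARED_TYPES = {
  id: :integer,
  status: :string,
  taxable_line_items: [:array, :object],
}

# Turn the declarations into a JSON-Schema-style hash that tools like
# Postman or Swagger UI could consume.
def schema_for(types)
  properties = types.each_with_object({}) do |(name, type), props|
    props[name] = if type.is_a?(Array)
      { type: "array", items: { type: type.last.to_s } }
    else
      { type: type.to_s }
    end
  end
  { type: "object", properties: properties, required: types.keys }
end

puts JSON.pretty_generate(schema_for(DECLARED_TYPES))
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;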

&lt;p&gt;Performance could probably be improved even more, both in the baseline &lt;code&gt;Primalize&lt;/code&gt; classes and the &lt;code&gt;JSONAPI&lt;/code&gt; wrapper.&lt;/p&gt;

&lt;p&gt;If you'd like to contribute, I'm always open to conversation (&lt;a href="https://gitter.im/jgaskins/primalize"&gt;gitter&lt;/a&gt; / &lt;a href="https://twitter.com/jamie_gaskins"&gt;twitter&lt;/a&gt; / comments on this post), &lt;a href="https://github.com/jgaskins/primalize/issues"&gt;suggestions&lt;/a&gt;, and &lt;a href="https://github.com/jgaskins/primalize/pulls"&gt;pull requests&lt;/a&gt;. Thanks, everyone! ❤️&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>json</category>
      <category>api</category>
      <category>web</category>
    </item>
  </channel>
</rss>
