<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Schneems</title>
    <description>The latest articles on DEV Community by Schneems (@schneems).</description>
    <link>https://dev.to/schneems</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F646%2FLRzneJ_h.png</url>
      <title>DEV Community: Schneems</title>
      <link>https://dev.to/schneems</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/schneems"/>
    <language>en</language>
    <item>
      <title>A Fast Car Needs Good Brakes: How We Added Client Rate Throttling to the Heroku Platform API Gem</title>
      <dc:creator>Schneems</dc:creator>
      <pubDate>Tue, 07 Jul 2020 20:30:14 +0000</pubDate>
      <link>https://dev.to/heroku/a-fast-car-needs-good-brakes-how-we-added-client-rate-throttling-to-the-heroku-platform-api-gem-llk</link>
      <guid>https://dev.to/heroku/a-fast-car-needs-good-brakes-how-we-added-client-rate-throttling-to-the-heroku-platform-api-gem-llk</guid>
<description>&lt;p&gt;When API requests are made one after the other, they'll quickly hit rate limits, and when that happens:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1138899094137651200-432" src="https://platform.twitter.com/embed/Tweet.html?id=1138899094137651200"&gt;
&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;That tweet spawned a discussion that generated a quest to add rate throttling logic to the &lt;a href="https://rubygems.org/gems/platform-api"&gt;&lt;code&gt;platform-api&lt;/code&gt;&lt;/a&gt; gem that Heroku maintains for talking to its API in Ruby.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the term "rate throttling" is new to you, read &lt;a href="https://schneems.com/2020/06/25/rate-limiting-rate-throttling-and-how-they-work-together/"&gt;Rate limiting, rate throttling, and how they work together&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Heroku API uses the &lt;a href="https://brandur.org/rate-limiting"&gt;Generic Cell Rate Algorithm (GCRA), as described by Brandur in this post&lt;/a&gt;, on the server side. Heroku's &lt;a href="https://devcenter.heroku.com/articles/platform-api-reference#rate-limits"&gt;API docs&lt;/a&gt; state:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The API limits the number of requests each user can make per hour to protect against abuse and buggy code. Each account has a pool of request tokens that can hold at most 4500 tokens. Each API call removes one token from the pool. Tokens are added to the account pool at a rate of roughly 75 per minute (or 4500 per hour), up to a maximum of 4500. If no tokens remain, further calls will return 429 Too Many Requests until more tokens become available.&lt;/p&gt;
&lt;/blockquote&gt;
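&lt;p&gt;To sanity check the quoted numbers, the refill rate works out like this (a quick sketch of the arithmetic, not code from the API):&lt;/p&gt;

```ruby
# Refill arithmetic from the quoted docs: the bucket holds at most 4500
# tokens and refills at 4500 tokens per hour.
TOKENS_PER_HOUR = 4500.0

per_minute = TOKENS_PER_HOUR / 60 # => 75.0 (the "roughly 75 per minute")
per_second = per_minute / 60      # => 1.25
```

&lt;p&gt;So a client that sustains more than 1.25 requests per second for long enough will eventually drain the bucket and start seeing 429s.&lt;/p&gt;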

&lt;p&gt;I needed to write an algorithm that never errored as a result of a 429 response. A "simple" solution would be to add a retry to all requests when they see a 429, but that would effectively DDoS the API. So I made it a goal for the rate throttling client to also minimize its retry rate: if the client makes 100 requests and 10 of them get a 429 response, its retry rate is 10%. Since the code needed to be contained entirely in the client library, it had to function without distributed coordination between multiple clients on multiple machines, using only whatever information the Heroku API returned.&lt;/p&gt;
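&lt;p&gt;That retry-rate definition is simple enough to state in code (a hypothetical helper for illustration, not part of the gem):&lt;/p&gt;

```ruby
# Retry rate: the percentage of total requests that came back as a 429
# and had to be retried.
def retry_rate(retried_count, total_count)
  retried_count * 100.0 / total_count
end

retry_rate(10, 100) # => 10.0
```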

&lt;h2&gt;
  
  
  Making client throttling maintainable
&lt;/h2&gt;

&lt;p&gt;Before we can get into what logic goes into a quality rate throttling algorithm, I want to talk about the process that I used as I think the journey is just as fascinating as the destination.&lt;/p&gt;

&lt;p&gt;I initially started by wanting to write tests for my rate throttling strategy. I quickly realized that while the behavior "retries a request after a 429 response" is easy to check, the quality claim "this rate throttle strategy is better than others" could not be checked nearly as easily. The solution I came up with was to write a simulator in addition to tests. I would simulate the server's behavior, then boot up several processes and threads and hit the simulated server with requests to observe the system's behavior.&lt;/p&gt;

&lt;p&gt;I initially just output values to the CLI as the simulation ran, but found it challenging to make sense of them all, so I added charting. I then found my simulation took too long to run, so I added a mechanism to speed up simulated time. I used those two outputs to write what I thought was a pretty good rate throttling algorithm. The next task was wiring it up to the &lt;code&gt;platform-api&lt;/code&gt; gem.&lt;/p&gt;

&lt;p&gt;To help out, I paired with &lt;a href="https://twitter.com/lolaodelola"&gt;a Heroku engineer, Lola&lt;/a&gt;. We ended up making several PRs to a bunch of related projects, and that's its own story to tell. Finally, the day came when we were ready to get rate throttling into the &lt;code&gt;platform-api&lt;/code&gt; gem; all we needed was a review.&lt;/p&gt;

&lt;p&gt;Unfortunately, the algorithm I developed from "watching some charts for a few hours" didn't make a whole lot of sense, and it was painfully apparent that it wasn't maintainable. While I had developed a good gut feel for what a "good" algorithm did and how it behaved, I had no way of solidifying that knowledge into something that others could run with. Imagine someone in the future wants to make a change to the algorithm, and I'm no longer here. The tests I had could prevent them from breaking some expectations, but there was nothing to help them make a better algorithm.&lt;/p&gt;

&lt;h2&gt;
  
  
  The making of an algorithm
&lt;/h2&gt;

&lt;p&gt;At this point, I could explain the approach I had taken to build an algorithm, but I had no way to quantify the "goodness" of my algorithm. That's when I decided to throw it all away and start from first principles. Instead of asking "what would make my algorithm better," I asked, "how would I know a change to my algorithm is better" and then worked to develop some ways to quantify what "better" meant. Here are the goals I ended up coming up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimize average retry rate: The fewer failed API requests, the better&lt;/li&gt;
&lt;li&gt;Minimize maximum sleep time: Rate throttling involves waiting, and no one wants to wait for too long&lt;/li&gt;
&lt;li&gt;Minimize variance of request count between clients: No one likes working with a greedy co-worker; API clients are no different. No client in the distributed system should be an extreme outlier&lt;/li&gt;
&lt;li&gt;Minimize time to clear a large request capacity: As the system changes, clients should respond quickly to changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I figured that if I could generate metrics on my rate-throttle algorithm and compare it to simpler algorithms, then I could show why individual decisions were made.&lt;/p&gt;

&lt;p&gt;I moved my hacky scripts for my simulation into a separate repo and, rather than relying on watching charts and logs, moved to have my simulation &lt;a href="https://github.com/zombocom/rate_throttle_client/blob/master/lib/rate_throttle_client/demo.rb"&gt;produce numbers that could be used to quantify and compare algorithms&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With that work under my belt, I threw away everything I knew about rate-throttling and decided to use science and measurement to guide my way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing a better rate-throttling algorithm with science: exponential backoff
&lt;/h2&gt;

&lt;p&gt;Earlier I mentioned that a "simple" algorithm would be to retry requests. A step up in complexity and functionality would be to retry requests after an exponential backoff. I coded it up and got some numbers for a simulated 30-minute run (which takes 3 minutes of real-time):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Avg retry rate: 60.08 %
Max sleep time: 854.89 seconds
Stdev Request Count: 387.82

Time to clear workload (4500 requests, starting_sleep: 1s):
74.23 seconds

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we've got baseline numbers, how could we work to minimize any of these values? In my initial exponential backoff model, I multiplied sleep by a factor of 2.0, what would happen if I increased it to 3.0 or decreased it to 1.2?&lt;/p&gt;

&lt;p&gt;To find out, I plugged in those values and re-ran my simulations. I found that both the retry rate and the max sleep value correlated with the backoff factor, but in opposite directions. I could lower the retry rate by increasing the factor (to 3.0), but this increased my maximum sleep time. I could reduce the maximum sleep time by decreasing the factor (to 1.2), but that increased my retry rate.&lt;/p&gt;

&lt;p&gt;That experiment told me that if I wanted to optimize both retry rate and sleep time, I could not do it by changing only the exponential factor, since an improvement in one meant a degradation in the other.&lt;/p&gt;
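&lt;p&gt;For reference, a bare-bones version of the exponential backoff strategy might look like the following sketch. This is illustrative only, not the gem's actual code; the block passed to &lt;code&gt;call&lt;/code&gt; stands in for an HTTP request returning a status code, and &lt;code&gt;factor&lt;/code&gt; is the tunable multiplier discussed above:&lt;/p&gt;

```ruby
# Bare-bones exponential backoff sketch (illustrative; not the gem's code).
class ExponentialBackoff
  def initialize(factor: 2.0, starting_sleep: 1.0)
    @factor = factor
    @starting_sleep = starting_sleep
  end

  # The block stands in for an HTTP request and returns a status code.
  # A real client would also cap the number of retries.
  def call
    sleep_time = @starting_sleep
    loop do
      status = yield
      return status unless status == 429

      sleep(sleep_time)
      sleep_time *= @factor # 1, 2, 4, 8, ... with the default factor of 2.0
    end
  end
end
```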

&lt;p&gt;At this point, we could theoretically do anything, but our metrics judge our success. We could cap the maximum sleep time, for example, with code that says "don't sleep longer than 300 seconds", but that too would hurt the retry rate. The biggest concern for me in this example is the maximum sleep time: 854 seconds is over 14 minutes, which is WAAAYY too long for a single client to be sleeping.&lt;/p&gt;

&lt;p&gt;I ended up picking the 1.2 factor to decrease that value at the cost of a worse retry-rate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Avg retry rate: 80.41 %
Max sleep time: 46.72 seconds
Stdev Request Count: 147.84

Time to clear workload (4500 requests, starting_sleep: 1s):
74.33 seconds

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Forty-six seconds is better than 14 minutes of sleep by a long shot. How could we get the retry rate down?&lt;/p&gt;

&lt;h2&gt;
  
  
  Incremental improvement: exponential sleep with a gradual decrease
&lt;/h2&gt;

&lt;p&gt;In the exponential backoff model, the client backs off once it sees a 429, but as soon as it gets a successful response, it doesn't sleep at all. One way to reduce the retry rate would be to assume that once a request has been rate-throttled, future requests will need to wait as well. Essentially, we make the sleep value "sticky" and sleep before all requests. If we only remembered the sleep value, our rate throttle strategy wouldn't be responsive to changes in the system, and it would have a poor "time to clear workload." So instead of only remembering the sleep value, we gradually reduce it after every successful request. This logic is very similar to &lt;a href="https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start"&gt;TCP slow start&lt;/a&gt;.&lt;/p&gt;
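&lt;p&gt;Sketched in code, the "sticky" sleep with a gradual constant decrease might look like this. It is illustrative only; the names and default constants (the 1.2 multiplier and 0.8 decrease discussed in this post) are assumptions, not the gem's real implementation:&lt;/p&gt;

```ruby
# Exponential sleep with a sticky, gradually decreasing sleep value.
# Sleep happens before every request; the value grows on a 429 and
# shrinks by a constant amount after each success.
class GradualDecrease
  attr_reader :sleep_time

  def initialize(starting_sleep: 1.0, multiplier: 1.2, decrease: 0.8)
    @sleep_time = starting_sleep
    @multiplier = multiplier
    @decrease = decrease
  end

  # The block stands in for an HTTP request and returns a status code.
  def call
    loop do
      sleep(@sleep_time) # sticky: every request pays the current sleep value
      status = yield
      if status == 429
        @sleep_time *= @multiplier # exponential increase
      else
        @sleep_time = [@sleep_time - @decrease, 0.0].max # constant decrease
        return status
      end
    end
  end
end
```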

&lt;p&gt;How does it play out in the numbers?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Avg retry rate: 40.56 %
Max sleep time: 139.91 seconds
Stdev Request Count: 867.73

Time to clear workload (4500 requests, starting_sleep: 1s):
115.54 seconds

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Retry rate did go down by about half. Sleep time went up, but it's still well under the 14-minute mark we saw earlier. But there's a problem with a metric I've not talked about before, the "stdev request count." It's easier to understand if you look at a chart to see what's going on:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6nfsn6zs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/ipctuotj4tz1kwa/ExponentialBackoff.png%3Fraw%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6nfsn6zs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/ipctuotj4tz1kwa/ExponentialBackoff.png%3Fraw%3D1" alt="Exponential sleep with gradual decrease chart" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you can see one client is sleeping a lot (the red client) while other clients are not sleeping at all and chewing through all the available requests at the bottom. Not all the clients are behaving equitably. This behavior makes it harder to tune the system.&lt;/p&gt;

&lt;p&gt;One reason for this inequity is that all clients are decreasing by the same constant value for every successful request. For example, let's say we have a client A that is sleeping for 44 seconds, and client B that is sleeping for 11 seconds and both decrease their sleep value by 1 second after every request. If both clients ran for 45 seconds, it would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client A) Sleep 44 (Decrease value: 1)
Client B) Sleep 11 (Decrease value: 1)
Client B) Sleep 10 (Decrease value: 1)
Client B) Sleep 9 (Decrease value: 1)
Client B) Sleep 8 (Decrease value: 1)
Client B) Sleep 7 (Decrease value: 1)
Client A) Sleep 43 (Decrease value: 1)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So while client A has decreased by 1 second total, client B has decreased by 4 seconds total, since it is firing 4x as fast (i.e., its sleep time is 4x lower). So while the decrease rate is equal, it is not equitable. Ideally, we would want all clients to decrease at the same rate.&lt;/p&gt;

&lt;h2&gt;
  
  
  All clients created equal: exponential increase proportional decrease
&lt;/h2&gt;

&lt;p&gt;Since clients cannot communicate with each other in our distributed system, one way to guarantee proportional decreases is to use the sleep value in the decrease amount:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;decrease_value = (sleep_time) / some_value

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where &lt;code&gt;some_value&lt;/code&gt; is a magic number. In this scenario the same clients A and B running for 45 seconds would look like this with a value of 100:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client A) Sleep 44
Client B) Sleep 11
Client B) Sleep 10.89 (Decrease value: 11.00/100 = 0.1100)
Client B) Sleep 10.78 (Decrease value: 10.89/100 = 0.1089)
Client B) Sleep 10.67 (Decrease value: 10.78/100 = 0.1078)
Client B) Sleep 10.56 (Decrease value: 10.67/100 = 0.1067)
Client A) Sleep 43.56 (Decrease value: 44.00/100 = 0.4400)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now client A has had a decrease of 0.44, and client B has had a decrease of 0.4334 (11 seconds minus 10.5666 seconds), which is a lot more equitable than before. Since &lt;code&gt;some_value&lt;/code&gt; is tunable, I wanted to use a larger number so that the retry rate would be lower than 40%. I chose 4500 since that's the maximum number of requests in the GCRA bucket for Heroku's API.&lt;/p&gt;
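&lt;p&gt;The decrease step in the table can be checked with a couple of lines of code (a sketch of the arithmetic only, with a hypothetical helper name):&lt;/p&gt;

```ruby
# Proportional decrease: each client reduces its sleep value by a fraction
# of its own current value, so a client sleeping 4x longer also decreases
# 4x more per successful request.
SOME_VALUE = 100.0

def proportional_decrease(sleep_time)
  sleep_time - (sleep_time / SOME_VALUE)
end

sleep_b = 11.0
4.times { sleep_b = proportional_decrease(sleep_b) }
# sleep_b is now roughly 10.5666 (the "Sleep 10.56" row above)
```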

&lt;p&gt;Here's what the results looked like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Avg retry rate: 3.66 %
Max sleep time: 17.31 seconds
Stdev Request Count: 101.94

Time to clear workload (4500 requests, starting_sleep: 1s):
551.10 seconds

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The retry rate went WAAAY down, which makes sense since we're decreasing slower than before (the constant decrease value previously was 0.8). Stdev went way down as well; it's about 8x lower. Surprisingly, the max sleep time went down too. I believe this is due to a decrease in the number of exponential backoff events required. Here's what this algorithm looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HdwvR9TF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/hityqgl9vgqcon8/ExponentialIncreaseProportionalDecrease.png%3Fraw%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HdwvR9TF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/hityqgl9vgqcon8/ExponentialIncreaseProportionalDecrease.png%3Fraw%3D1" alt="Exponential increase proportional decrease chart" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only problem here is that the "time to clear workload" is 5x higher than before. What exactly is being measured here? In this scenario, we're simulating a cyclical workload where clients run under high load, then go through a light load, and then back to high load. The simulation starts all clients with a sleep value, but the server's rate-limit bucket is reset to a full 4500 requests. The time measures how long it takes the clients to clear all 4500 requests.&lt;/p&gt;

&lt;p&gt;What this metric of 551 seconds is telling me is that this strategy is not very responsive to a change in the system. To illustrate this problem, I ran the same algorithm starting each client at 8 seconds of sleep instead of 1 second to see how long it would take to trigger a rate limit:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tcImel-n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://heroku-blog-files.s3.amazonaws.com/posts/1594143623-CleanShot%25202020-07-07%2520at%252010.39.35%25402x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tcImel-n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://heroku-blog-files.s3.amazonaws.com/posts/1594143623-CleanShot%25202020-07-07%2520at%252010.39.35%25402x.png" alt="Exponential increase proportional decrease chart 7-hour torture test" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The graph shows that it takes about 7 hours to clear all these requests, which is not good. What we need is a way to clear requests faster when there are more requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  The only remaining option: exponential increase proportional remaining decrease
&lt;/h2&gt;

&lt;p&gt;When you make a request to the Heroku API, it tells you how many requests you have left remaining in your bucket in a header. Our problem with the "proportional decrease" is mostly that when there are lots of requests remaining in the bucket, it takes a long time to clear them (if the prior sleep rate was high, such as in a varying workload). To account for this, we can decrease the sleep value quicker when the remaining bucket is full and slower when the remaining bucket is almost empty. To express that in an expression, it might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;decrease_value = (sleep_time * request_count_remaining) / some_value

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case, I chose &lt;code&gt;some_value&lt;/code&gt; to be the maximum number of requests possible in a bucket, which is 4500. You can imagine a scenario where workers were very busy for a period and being rate limited. Then no jobs came in for over an hour - perhaps the workday was over, and the number of requests remaining in the bucket re-filled to 4500. On the next request, this algorithm would reduce the sleep value by itself since 4500/4500 is one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;decrease_value = sleep_time * 4500 / 4500

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means it doesn't matter how immense the sleep value is; it will adjust fairly quickly to a change in workload. Good in theory, but how does it perform in the simulation?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Avg retry rate: 3.07 %
Max sleep time: 17.32 seconds
Stdev Request Count: 78.44

Time to clear workload (4500 requests, starting_sleep: 1s):
84.23 seconds

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This rate throttle strategy performs very well on all metrics: it is the best, or very close to it, on each one. Here's a chart:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K8VSV3nu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/ixwy5quq2y8uyjw/ExponentialIncreaseProportionalRemainingDecrease.png%3Fraw%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K8VSV3nu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/ixwy5quq2y8uyjw/ExponentialIncreaseProportionalRemainingDecrease.png%3Fraw%3D1" alt="Exponential increase proportional remaining decrease chart" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This strategy is the "winner" of my experiments and the algorithm that I chose to go into the &lt;code&gt;platform-api&lt;/code&gt; gem.&lt;/p&gt;
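&lt;p&gt;For the curious, here is a condensed sketch of the winning strategy. It is illustrative only; the class and method names are my own, and the real, tested implementation lives in the &lt;a href="https://github.com/zombocom/rate_throttle_client"&gt;rate_throttle_client&lt;/a&gt; library:&lt;/p&gt;

```ruby
# "Exponential increase, proportional remaining decrease" sketch.
# On a 429: multiply the sleep value (exponential backoff).
# On success: decrease proportionally to both the current sleep value and
# how many requests remain in the server's bucket.
MAX_LIMIT = 4500.0 # maximum tokens in Heroku's GCRA bucket

class RemainingDecrease
  attr_reader :sleep_time

  def initialize(starting_sleep: 1.0, multiplier: 1.2)
    @sleep_time = starting_sleep
    @multiplier = multiplier
  end

  # The block stands in for an HTTP request; it must return a
  # [status, remaining] pair, where remaining is the request count left
  # in the bucket as reported by the API's response header.
  def call
    loop do
      sleep(@sleep_time)
      status, remaining = yield
      if status == 429
        @sleep_time *= @multiplier # exponential increase on a 429
      else
        # Decrease faster when the bucket is fuller.
        @sleep_time -= @sleep_time * (remaining / MAX_LIMIT)
        return status
      end
    end
  end
end
```

&lt;p&gt;Note that when &lt;code&gt;remaining&lt;/code&gt; equals the full 4500, the decrease term equals the entire sleep value, so the client recovers from even a huge sleep value after a single successful request.&lt;/p&gt;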

&lt;h2&gt;
  
  
  My original solution
&lt;/h2&gt;

&lt;p&gt;While I originally built this whole elaborate scheme to prove my solution was optimal, I did something better by accident. By following a scientific, measurement-based approach, I found a simpler solution that performed better than my original answer. I'm happier about that; it shows the extra effort was worth it. To "prove" that what I found by observation and tinkering could be not only quantified with numbers but improved upon is fantastic.&lt;/p&gt;

&lt;p&gt;While my original solution had some scripts and charts, this new solution has tests covering the behavior of the simulation and charting code. My initial solution was very brittle, and I didn't feel comfortable coming back and making changes to it; this new solution and the accompanying support code is a joy to work with. My favorite part, though, is that now if anyone asks me "what about trying " or "have you considered ", I can point them at &lt;a href="https://github.com/zombocom/rate_throttle_client"&gt;my rate throttle client library&lt;/a&gt;; they have all the tools to implement their idea, test it, and report back with a swift feedback loop.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;gem 'platform-api', '~&amp;gt; 3.0'&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;While I mostly wanted to talk about the process of writing rate-throttling code, this whole thing started from a desire to get client rate-throttling into the &lt;code&gt;platform-api&lt;/code&gt; gem. Once I did the work to prove my solution was reasonable, we worked on a rollout strategy. We released a version of the gem in a minor bump with rate-throttling available, but with a "null" strategy that would preserve existing behavior. This release strategy allowed us to issue a warning to anyone depending on the original behavior. Then we released a major version with the rate-throttling strategy enabled by default. We did this first with "pre" release versions and then actual versions to be extra safe.&lt;/p&gt;

&lt;p&gt;So far, the feedback has been, overwhelmingly, that no one has noticed. We didn't cause any significant breaks or introduce any severe dysfunction to any applications. If you've not already, I invite you to upgrade to 3.0.0+ of the &lt;code&gt;platform-api&lt;/code&gt; gem and give it a spin. I would love to hear your feedback.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Get ahold of Richard and stay up-to-date with Ruby, Rails, and other programming related content through a &lt;a href="https://www.schneems.com/mailinglist"&gt;subscription to his mailing list&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>ruby</category>
      <category>algorithms</category>
    </item>
    <item>
      <title>Puma 4: Hammering Out H13s—A Debugging Story</title>
      <dc:creator>Schneems</dc:creator>
      <pubDate>Fri, 12 Jul 2019 12:53:55 +0000</pubDate>
      <link>https://dev.to/heroku/puma-4-hammering-out-h13s-a-debugging-story-fbm</link>
      <guid>https://dev.to/heroku/puma-4-hammering-out-h13s-a-debugging-story-fbm</guid>
      <description>&lt;p&gt;For quite some time we've received reports from our larger customers about a mysterious &lt;a href="https://devcenter.heroku.com/articles/error-codes#h13-connection-closed-without-response" rel="noopener noreferrer"&gt;H13 - Connection closed error&lt;/a&gt; showing up for Ruby applications. Curiously it only ever happened around the time they were deploying or scaling their dynos. Even more peculiar, it only happened to relatively high scale applications. We couldn't reproduce the behavior on an example app. This is a story about distributed coordination, the TCP API, and how we debugged and fixed a bug in Puma that only shows up at scale.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheroku-blog-files.s3.amazonaws.com%2Fposts%2F1562883126-Screenshot%25202019-06-23%252015.04.50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheroku-blog-files.s3.amazonaws.com%2Fposts%2F1562883126-Screenshot%25202019-06-23%252015.04.50.png" alt="Screenshot showing H13 errors"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connection closed
&lt;/h2&gt;

&lt;p&gt;First of all, what even is an H13 error? From our error page documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This error is thrown when a process in your web dyno accepts a connection, but then closes the socket without writing anything to it. One example where this might happen is when a Unicorn web server is configured with a timeout shorter than 30s and a request has not been processed by a worker before the timeout happens. In this case, Unicorn closes the connection before any data is written, resulting in an H13.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Fun fact: Our error codes start with the letter of the component where they came from. Our Routing code is all written in Erlang and is named "Hermes" so any error codes from Heroku that start with an "H" indicate an error from the router.&lt;/p&gt;

&lt;p&gt;The documentation gives an example of an H13 error code with the Unicorn webserver, but it can happen any time a server closes a connection without writing a response. Here’s an example showing how to &lt;a href="https://github.com/hunterloftis/heroku-node-errcodes/blob/master/h13" rel="noopener noreferrer"&gt;reproduce an H13 explicitly with a node app&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What does it mean for an application to get an H13? Essentially, every one of these errors corresponds to a customer who got an error page. Serving a handful of errors every time the app restarts, deploys, or auto-scales is an awful user experience, so it's worth finding and fixing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging
&lt;/h2&gt;

&lt;p&gt;I have maintained the Ruby buildpack for several years, and part of that job is to handle support escalations that are too tricky for our core supporters. In addition to the normal deployment issues, I've been developing an interest in performance, scalability, and web servers (I recently started helping to maintain the Puma webserver). Because of these interests, when a tricky issue comes in from one of our larger customers, especially if it only happens at scale, I take particular interest.&lt;/p&gt;

&lt;p&gt;To understand the problem, you need to know a little about the nature of sending distributed messages. Webservers are inherently distributed systems, and to make things more complicated, we often use distributed systems to manage our distributed systems.&lt;/p&gt;

&lt;p&gt;In the case of this error, it didn't seem to come from a customer's application code i.e. they didn't seem to have anything misconfigured. It also only seemed to happen when a dyno was being shut down.&lt;/p&gt;

&lt;p&gt;To shut down a dyno, two things have to happen: we need to send a &lt;code&gt;SIGTERM&lt;/code&gt; to the processes on the dyno, which &lt;a href="https://devcenter.heroku.com/articles/what-happens-to-ruby-apps-when-they-are-restarted" rel="noopener noreferrer"&gt;tells the webserver to safely shut down&lt;/a&gt;, and we need to tell our router to stop sending requests to that dyno since it will be shut down soon.&lt;/p&gt;

&lt;p&gt;These two operations happen on two different systems. The dyno runs on one server; the router, which serves our requests, is a separate system, and is itself a distributed system. It turns out that while both systems get the message at about the same time, the router might still let a few requests trickle into the dyno being shut down after it receives the &lt;code&gt;SIGTERM&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That explains the problem then, right? The reason this only happens on apps with a large amount of traffic is they get so many requests there is more chance that there will be a race condition between when the router stops sending requests and the dyno receives the &lt;code&gt;SIGTERM&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That sounds like a bug with the router then, right? Before we got too deep into the difficulties of distributed coordination, I noticed that other apps with just as much load weren't getting H13 errors. What did that tell me? It told me that the distributed behavior of our system wasn't to blame. If other webservers could handle this just fine, then we needed to update our webserver, Puma in this case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reproduction
&lt;/h2&gt;

&lt;p&gt;When you're dealing with a distributed system bug that's reliant on a race condition, reproducing the issue can be a tricky affair. While pairing on the issue with another Heroku engineer, &lt;a href="https://twitter.com/chapambrose?lang=en" rel="noopener noreferrer"&gt;Chap Ambrose&lt;/a&gt;, we hit an idea. First, we would reproduce the H13 behavior in any app to figure out what &lt;a href="https://curl.haxx.se/libcurl/c/libcurl-errors.html" rel="noopener noreferrer"&gt;curl exit code&lt;/a&gt; we would get, and then we could try to reproduce the exact failure conditions with a more complicated example.&lt;/p&gt;

&lt;p&gt;A simple reproduction rack app looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Proc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;current_pid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pid&lt;/span&gt;
  &lt;span class="n"&gt;signal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SIGKILL"&lt;/span&gt;
  &lt;span class="no"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;current_pid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'200'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'Content-Type'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'text/html'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'A barebones rack app.'&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run this &lt;code&gt;config.ru&lt;/code&gt; with Puma and hit it with a request, you'll get a connection that is closed without a response getting written. That was pretty easy.&lt;/p&gt;

&lt;p&gt;The curl exit code when a connection is closed like this is &lt;code&gt;52&lt;/code&gt;, so now we can detect when it happens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl localhost:9292
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (52) Empty reply from server

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
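&lt;p&gt;In a test script, that exit status lets us tell a closed connection apart from other failures. Here's a minimal Ruby sketch of the idea (the helper name is made up; the code meanings come from libcurl's error list linked above):&lt;/p&gt;

```ruby
# Map a curl exit status to a human-readable failure reason.
# (Exit-code meanings come from libcurl's error documentation.)
def describe_curl_status(status)
  case status
  when 0  then "success"
  when 7  then "failed to connect"
  when 52 then "empty reply: the server closed the connection without responding"
  else         "other failure (exit code #{status})"
  end
end

# After shelling out, $?.exitstatus holds the exit code:
#   system("curl localhost:9292")
#   puts describe_curl_status($?.exitstatus)
```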



&lt;p&gt;A more complicated reproduction happens when SIGTERM is called but requests keep coming in. To facilitate that we ended up with a reproduction that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Proc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;puma_pid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'puma.pid'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;to_i&lt;/span&gt;
  &lt;span class="no"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"SIGTERM"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;puma_pid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="no"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"SIGTERM"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'200'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'Content-Type'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'text/html'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'A barebones rack app.'&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;config.ru&lt;/code&gt; rack app sends a &lt;code&gt;SIGTERM&lt;/code&gt; to itself and its parent process on the first request, so future requests come in while the server is shutting down.&lt;/p&gt;

&lt;p&gt;Then we can write a script that boots this server and hits it with a bunch of requests in parallel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="sb"&gt;`puma &amp;gt; puma.log`&lt;/span&gt; &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"NO_PUMA_BOOT"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="nb"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'fileutils'&lt;/span&gt;
&lt;span class="no"&gt;FileUtils&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mkdir_p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"tmp/requests"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sb"&gt;`curl localhost:9292/?request_thread=&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sb"&gt; &amp;amp;&amp;gt; tmp/requests/requests&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sb"&gt;.log`&lt;/span&gt;
    &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="vg"&gt;$?&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we run this reproduction, we see that it gives us the exact behavior we're looking to reproduce. Even better, when this code is deployed on Heroku we can see an H13 error is triggered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2019-05-10T18:41:06.859330+00:00 heroku[router]: at=error code=H13 desc="Connection closed without response" method=GET path="/?request_thread=6" host=ruby-h13.herokuapp.com request_id=05696319-a6ff-4fad-b219-6dd043536314 fwd="&amp;lt;ip&amp;gt;" dyno=web.1 connect=0ms service=5ms status=503 bytes=0 protocol=https

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can get all this code and some more details on the &lt;a href="https://github.com/schneems/puma_connection_closed_reproduction" rel="noopener noreferrer"&gt;reproduction script repo&lt;/a&gt;. And here's the &lt;a href="https://github.com/puma/puma/issues/1802" rel="noopener noreferrer"&gt;Puma issue I was using to track the behavior&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing the Connection Closed Bug
&lt;/h2&gt;

&lt;p&gt;With a reproduction script in hand, it was possible for us to add debugging statements to Puma internals to see how it behaved while experiencing this issue.&lt;/p&gt;

&lt;p&gt;With a little investigation, it turned out that Puma never explicitly closed the socket of the connection. Instead, it relied on the process stopping to close it.&lt;/p&gt;

&lt;p&gt;What exactly does that mean? Every time you type a URL into a browser, the request gets routed to a server. On Heroku, the request goes to our router. The router then attempts to connect to a dyno (server) and pass it the request. The underlying mechanism that allows this is the webserver (Puma) on the dyno opening up a TCP socket on a $PORT. The request is accepted onto the socket, and it will sit there until the webserver (Puma) is ready to read it in and respond to it.&lt;/p&gt;
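&lt;p&gt;You can see that queueing behavior with nothing but Ruby's standard library. In this sketch (the request contents are made up), the request waits on the listening socket until the server calls &lt;code&gt;accept&lt;/code&gt;:&lt;/p&gt;

```ruby
require 'socket'

server = TCPServer.new(0)   # bind a listening TCP socket to a free port
port   = server.addr[1]

# The client connects and writes its request before the server accepts it.
client = TCPSocket.new('127.0.0.1', port)
client.write("GET / HTTP/1.1\r\nHost: example\r\n\r\n")
sleep 0.1                   # the request sits queued on the socket meanwhile

conn    = server.accept     # the server is now ready to read it in
request = conn.readpartial(1024)

conn.close
client.close
server.close
```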

&lt;p&gt;What behavior, then, do we want in order to avoid this H13 error? In the error case, the router connects to the dyno successfully, and because Puma accepts the request onto its socket, the router expects the dyno to write a response. If the socket is closed instead when the router tries to pass on the request, it knows that Puma cannot respond, and it will retry passing the connection to another dyno. There are times when a webserver might reject a connection, for example, if the socket is full (the default is to allow only 1024 connections on the socket backlog), or if the entire server has crashed.&lt;/p&gt;

&lt;p&gt;In our case, closing the socket is what we want. It correctly communicates to the router to do the right thing (try passing the connection to another dyno or hold onto it in the case all dynos are restarting).&lt;/p&gt;

&lt;p&gt;So then, the solution to the problem was to explicitly close the socket before attempting to shut down. Here's the &lt;a href="https://github.com/puma/puma/pull/1808" rel="noopener noreferrer"&gt;PR&lt;/a&gt;. The main magic is just one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="vi"&gt;@launcher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close_binder_listeners&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're a worrier (I know I am) you might be afraid that closing the socket prevents any in-flight requests from being completed successfully. Lucky for us closing a socket prevents incoming requests but still allows us to respond to existing requests. If you don't believe me, think about how you could test it with one of my above example repos.&lt;/p&gt;
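&lt;p&gt;If you'd rather see it in miniature first, here is a small stdlib-only sketch of that behavior: we close the listening socket mid-request, still answer the already-accepted connection, and observe that new connections are refused:&lt;/p&gt;

```ruby
require 'socket'

server = TCPServer.new(0)
port   = server.addr[1]

client = TCPSocket.new('127.0.0.1', port)
conn   = server.accept      # an in-flight request

server.close                # stop accepting any new connections...

# ...but the already-accepted connection can still be answered:
conn.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
conn.close

response = client.read
client.close

# A brand new connection is refused, which is exactly the signal
# that tells a router to retry the request elsewhere.
refused = begin
  TCPSocket.new('127.0.0.1', port)
  false
rescue Errno::ECONNREFUSED
  true
end
```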

&lt;h2&gt;
  
  
  Testing distributed behavior
&lt;/h2&gt;

&lt;p&gt;I don't know if this behavior in Puma broke, or maybe it never worked. To try to make sure that it continues to work in the future, I wanted to write a test for it. I reached out to &lt;a href="https://twitter.com/touchingvirus?lang=en" rel="noopener noreferrer"&gt;dannyfallon&lt;/a&gt; who has helped out on some other Puma issues, and we remote paired on the tests using &lt;a href="https://tuple.app/" rel="noopener noreferrer"&gt;Pair With Tuple&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The tests ended up being &lt;a href="https://github.com/puma/puma/pull/1808/files#diff-ad8d9f1e0cf07519c2372ca5f60ca4d2" rel="noopener noreferrer"&gt;not terribly different than our example reproduction above&lt;/a&gt;, but it was pretty tricky to get it to have consistent behavior.&lt;/p&gt;

&lt;p&gt;With an issue that doesn't regularly show up unless it's on an app at scale, it's essential to test, as &lt;a href="https://twitter.com/mipsytipsy" rel="noopener noreferrer"&gt;Charity Majors&lt;/a&gt; would say, "in production". We had several Heroku customers who were seeing this error try out my patch. They reported some other issues, which we were able to resolve; after fixing those, the H13 errors looked to be gone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheroku-blog-files.s3.amazonaws.com%2Fposts%2F1562883272-59190728-7bf56a80-8b4b-11e9-8e01-84238fecf24c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheroku-blog-files.s3.amazonaws.com%2Fposts%2F1562883272-59190728-7bf56a80-8b4b-11e9-8e01-84238fecf24c.png" alt="Screenshot showing no more H13 errors"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Rolling out the fix
&lt;/h2&gt;

&lt;p&gt;Puma 4, which came with this fix, &lt;a href="https://github.com/puma/puma/releases/tag/v4.0.0" rel="noopener noreferrer"&gt;was recently released&lt;/a&gt;. We reached out to a customer who was using Puma and seeing a large number of H13s, and this release stopped them in their tracks.&lt;/p&gt;

&lt;p&gt;Learn more about Puma 4 below.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1143577608791220224-822" src="https://platform.twitter.com/embed/Tweet.html?id=1143577608791220224"&gt;
&lt;/iframe&gt;




&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>puma</category>
      <category>debugging</category>
      <category>tcp</category>
    </item>
    <item>
      <title>Debugging in Ruby—Busting a Year-old Bug in Sprockets</title>
      <dc:creator>Schneems</dc:creator>
      <pubDate>Tue, 26 Feb 2019 18:30:20 +0000</pubDate>
      <link>https://dev.to/heroku/debugging-in-rubybusting-a-year-old-bug-in-sprockets-42ki</link>
      <guid>https://dev.to/heroku/debugging-in-rubybusting-a-year-old-bug-in-sprockets-42ki</guid>
      <description>&lt;p&gt;Debugging is an important skill to develop as you work your way up to more complex projects. Seasoned engineers have a sixth sense for squashing bugs and have built up an impressive collection of tools that help them diagnose and fix bugs.&lt;/p&gt;

&lt;p&gt;I'm a member of Heroku’s Ruby team and creator of &lt;a href="https://www.codetriage.com/"&gt;CodeTriage&lt;/a&gt; and today we’ll look at the tools that I used on a journey to fix a gnarly bug in &lt;a href="https://github.com/rails/sprockets"&gt;Sprockets&lt;/a&gt;. Sprockets is an asset packaging system written in Ruby that lies at the heart of Rails’ asset processing pipeline.&lt;/p&gt;

&lt;p&gt;At the end of the post, you will know how Sprockets works and how to debug in Ruby.&lt;/p&gt;

&lt;h1&gt;
  
  
  Unexpected Behavior in Sprockets
&lt;/h1&gt;

&lt;p&gt;Sprockets gives developers a convenient way to compile, minify, and serve JavaScript and CSS files. Its extensible preprocessor pipeline has support for languages like CoffeeScript, Sass, and SCSS. It is included in Rails via the &lt;a href="https://github.com/rails/sprockets-rails"&gt;sprockets-rails&lt;/a&gt; gem but can also be used in a standalone fashion, for example, to &lt;a href="http://recipes.sinatrarb.com/p/asset_management/sprockets"&gt;package Sinatra assets&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Earlier this month, we recorded a &lt;a href="https://www.youtube.com/watch?v=ZEoF_OWpXZY&amp;amp;feature=youtu.be"&gt;live-debugging session&lt;/a&gt; where we experienced a particularly curious issue in Sprockets. We noticed that the bug broke the critical asset precompilation rake task, but only if the name of the project folder was changed between successive task executions. While project folder renames might seem relatively uncommon, they happen frequently on Heroku because each build happens in a directory with a unique name.&lt;/p&gt;

&lt;p&gt;While this bug itself is interesting, what’s even more interesting is learning from our debugging process. You can learn about the tools and steps we use to narrow down the root cause, and ultimately fix the bug.&lt;/p&gt;

&lt;p&gt;If you’d like to watch the full debugging session, check out the video or just follow along by reading the text below. We’ll walk through a debug workflow and find the root cause of this bug.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ZEoF_OWpXZY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  A Guide to Debugging in Ruby
&lt;/h1&gt;

&lt;p&gt;Head-scratching, non-obvious bugs are worth investigating because they may lead to other unnoticed or unreported bugs.&lt;/p&gt;

&lt;p&gt;Thankfully, Ruby comes with some powerful debugging tools that are easy to use for beginners. For a nice overview, check out this &lt;a href="https://www.rubyguides.com/2015/07/ruby-debugging/"&gt;Ruby debugging guide&lt;/a&gt; that covers basics like the difference between &lt;code&gt;p&lt;/code&gt; and &lt;code&gt;puts&lt;/code&gt; and also discusses a few of the interactive debuggers that are available in the Ruby ecosystem. For the rest of this post, however, you won’t need to know anything more advanced than &lt;code&gt;puts&lt;/code&gt;.&lt;/p&gt;
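&lt;p&gt;As a quick taste of that difference: &lt;code&gt;puts&lt;/code&gt; calls &lt;code&gt;to_s&lt;/code&gt; and returns &lt;code&gt;nil&lt;/code&gt;, while &lt;code&gt;p&lt;/code&gt; calls &lt;code&gt;inspect&lt;/code&gt; and returns its argument, which makes it easy to drop into existing code:&lt;/p&gt;

```ruby
value = "two\nlines"

puts value    # calls to_s: prints the string across two lines, returns nil
p    value    # calls inspect: prints "two\nlines" with quotes and escapes visible

# Because p returns its argument, it can wrap any expression inline:
doubled = p(21 * 2)   # prints 42 and still assigns 42
```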

&lt;h1&gt;
  
  
  Reproducing the Bug
&lt;/h1&gt;

&lt;p&gt;The best way to learn debugging is just to dive in and try it. Let’s set up Sprockets in a local environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clone CodeTriage
&lt;/h2&gt;

&lt;p&gt;We need a Rails app to reproduce this bug so we’ll use an open source example. I am the creator of CodeTriage so it’s natural to use that application to demonstrate the problem, although you can reproduce it with any Rails app that uses Sprockets. CodeTriage has helped developers triage issues for &lt;a href="https://www.codetriage.com/what"&gt;thousands of open-source projects&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, clone the CodeTriage repository, install dependencies, then switch to a branch that contains the code we need to reproduce the bug. A working Ruby environment is assumed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone git@github.com:codetriage/codetriage
$ cd codetriage

$ gem install bundler
$ bundle install

$ cp config/database.example.yml config/database.yml
$ git checkout 52d57d13

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Compile the Assets with Rake
&lt;/h2&gt;

&lt;p&gt;Next, execute the following steps to make the bug show up in our local environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ rm -rf tmp/cache
$ rm -rf public/assets

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, run the rake task for precompiling assets, which should succeed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ RAILS_ENV=production RAILS_SERVE_STATIC_FILES=1 RAILS_LOG_TO_STDOUT=1 bin/rake assets:precompile

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Rename the Project Folder
&lt;/h2&gt;

&lt;p&gt;Now, change the name of the project directory by copying its files into a new directory called &lt;code&gt;codetriage-after&lt;/code&gt; and deleting the old &lt;code&gt;codetriage&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd ..
$ cp -r codetriage codetriage-after
$ rm -rf codetriage
$ cd codetriage-after

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One more time, run the &lt;code&gt;assets:precompile&lt;/code&gt; rake task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ RAILS_ENV=production RAILS_SERVE_STATIC_FILES=1 RAILS_LOG_TO_STDOUT=1 bin/rake assets:precompile

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The task should fail this time and produce the following error message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EwnicKNm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://heroku-blog-files.s3.amazonaws.com/posts/1551205073-Screen%2520Shot%25202019-02-26%2520at%252010.17.16%2520AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EwnicKNm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://heroku-blog-files.s3.amazonaws.com/posts/1551205073-Screen%2520Shot%25202019-02-26%2520at%252010.17.16%2520AM.png" alt="Screen Shot 2019-02-26 at 10" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sprockets is complaining that it can’t find the file &lt;code&gt;/private/tmp/repro/codetriage/app/assets/javascripts/application.js.erb&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This actually makes sense because in the last step we changed &lt;code&gt;codetriage&lt;/code&gt; to &lt;code&gt;codetriage-after&lt;/code&gt; as our project folder name, yet it is looking in &lt;code&gt;codetriage&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;(Note that the &lt;code&gt;/private/tmp/repro&lt;/code&gt; part of the path may be different for you based on where you cloned the &lt;code&gt;codetriage&lt;/code&gt; repository.)&lt;/p&gt;

&lt;h1&gt;
  
  
  Finding the Root Cause of the Bug
&lt;/h1&gt;

&lt;p&gt;Now that we have reproduced the bug in the video, the next step is to jump into the code of the Sprockets dependency at one of the lines in the stack trace, in a method called &lt;code&gt;fetch_asset_from_dependency_cache&lt;/code&gt;. Reading the code of the libraries your application depends on is often required when debugging, especially once you have ruled out any issues with the code you’ve written.&lt;/p&gt;

&lt;h2&gt;
  
  
  Read gem code with bundle open
&lt;/h2&gt;

&lt;p&gt;Ruby’s de-facto gem manager &lt;a href="https://bundler.io/"&gt;Bundler&lt;/a&gt; contains a helpful command called &lt;code&gt;bundle open&lt;/code&gt; that opens the source code of a gem in your favorite editor. Run it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ bundle open sprockets

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As long as you have a &lt;code&gt;$EDITOR&lt;/code&gt; or &lt;code&gt;$BUNDLER_EDITOR&lt;/code&gt; environment variable set, your preferred code editor will open to the project directory of the specified gem.&lt;/p&gt;

&lt;p&gt;Now you can browse the gem source code and even modify it, adding print statements to see the value of variables or trying out various fixes to see if they work.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Sprockets Caches Files
&lt;/h2&gt;

&lt;p&gt;The error message above implied that the wrong value is being stored in the Sprockets cache, so the next step is to look at the cache to confirm. The cache is stored on disk across many files, so first we need to find the specific file that contains the record we want to inspect. The key to that record is a digest of the Sprockets cache ID. That’s the value we’ll try to find in the files.&lt;/p&gt;

&lt;p&gt;Once you have the Sprockets code open, navigate to &lt;code&gt;lib/sprockets/loader.rb&lt;/code&gt;, where you’ll find the method &lt;code&gt;fetch_asset_from_dependency_cache&lt;/code&gt; toward the end. The documentation for this method provides insight into how Sprockets uses the idea of pipelines, histories, and dependencies to aid in caching. To get more of the backstory, I recommend watching the video starting from about the six-minute mark.&lt;/p&gt;

&lt;p&gt;We examined the on-disk contents of the Sprockets cache, looking for the ID cache key of a specific object in the Sprockets cache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ grep -R 5d0abb0a8654a1f03d6b27 tmp/cache

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a helpful debugging command to file away for later. &lt;code&gt;grep -R&lt;/code&gt; searches through the &lt;code&gt;tmp/cache&lt;/code&gt; directory looking for any files that contain the string “5d0abb0a8654a1f03d6b27”, which is a Sprockets cache key. The &lt;code&gt;-R&lt;/code&gt; flag is what makes it traverse directories recursively.&lt;/p&gt;

&lt;p&gt;In our case, the grep command does produce a cache file and we can use &lt;code&gt;cat&lt;/code&gt; to view the contents. Inside of that cache file, we find something unexpected: an absolute path to an asset. Sprockets should only cache relative paths, not absolute paths. Since we changed the absolute path to our project directory to create this bug, it’s quite likely that this is the culprit.&lt;/p&gt;
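&lt;p&gt;The underlying idea of the eventual fix is simply converting between absolute and root-relative paths, which can be sketched with Ruby's &lt;code&gt;Pathname&lt;/code&gt; (the paths here are illustrative, and this is not Sprockets' actual implementation):&lt;/p&gt;

```ruby
require 'pathname'

root     = Pathname.new("/private/tmp/repro/codetriage")
absolute = root.join("app/assets/javascripts/application.js.erb")

# Cache paths relative to the project root...
relative = absolute.relative_path_from(root)

# ...so a rename of the root directory doesn't invalidate them:
new_root = Pathname.new("/private/tmp/repro/codetriage-after")
restored = new_root.join(relative)
```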

&lt;h2&gt;
  
  
  Loading Up IRB
&lt;/h2&gt;

&lt;p&gt;To investigate further and confirm our suspicion, we fire up IRB, the interactive Ruby shell. If you’re new to Ruby or to IRB, we recommend &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-irb-to-explore-ruby"&gt;How To Use IRB to Explore Ruby&lt;/a&gt; as a good way to see how to use it. It’s simple but powerful and is a must-have in your Ruby debugging toolkit.&lt;/p&gt;

&lt;p&gt;We then use IRB to inspect the file cache from Sprockets’ point of view.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ irb
irb(main):001:0&amp;gt; require 'sprockets'
irb(main):002:0&amp;gt; Sprockets::Environment.new.cache
irb(main):003:0&amp;gt; Sprockets::Environment.new.cache.get("5d0abb0a8654a1f03d6b27")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unfortunately, this does not work because the cache key is not the same as the cache ID. So, we move on to confirming our hypothesis in another way. We still include this example here to let you know that IRB is something you can use for any Ruby code, and specifically with the handy &lt;code&gt;Environment&lt;/code&gt; class in Sprockets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fixing to_load and to_link
&lt;/h2&gt;

&lt;p&gt;To fix the bug, let’s modify the &lt;code&gt;to_load&lt;/code&gt; and &lt;code&gt;to_link&lt;/code&gt; methods in &lt;code&gt;loader.rb&lt;/code&gt; to force relative paths for objects going into the cache and coming out, using the &lt;code&gt;compress_from_root&lt;/code&gt; and &lt;code&gt;expand_from_root&lt;/code&gt; utility methods from Sprockets &lt;code&gt;base.rb&lt;/code&gt;. This ensures that absolute paths won’t make their way into the cache again, and consequently, that renaming the project directory won’t cause any issues in subsequent asset compilations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_load&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_load&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;empty?&lt;/span&gt;
  &lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_load&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_load&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;dup&lt;/span&gt;
  &lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_load&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;map!&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;compress_from_root&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_link&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_link&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;empty?&lt;/span&gt;
  &lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_link&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_link&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;dup&lt;/span&gt;
  &lt;span class="n"&gt;cached_asset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:metadata&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:to_link&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;map!&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;compress_from_root&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our &lt;a href="https://github.com/rails/sprockets/pull/547/files"&gt;pull request to fix the bug&lt;/a&gt; contains a test to prove that the fix works. Writing tests for your bug fixes is a best practice that you should always strive to follow. It’s the best way to prevent old bugs from crawling back into your codebase.&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrap-up
&lt;/h1&gt;

&lt;p&gt;Inevitably, your code will do something that couldn’t possibly happen. That’s when you need to get out your debugging tools. We hope that you have picked up a few new ones from this post.&lt;/p&gt;

&lt;p&gt;Sooner or later, that impossible behavior will show up in production too. If your app runs on Heroku, make sure to familiarize yourself with the variety of &lt;a href="https://devcenter.heroku.com/articles/logging"&gt;logging solutions available&lt;/a&gt; as add-ons. These add-ons make running and debugging problems on Heroku easier, and they only take seconds to set up.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>sprockets</category>
      <category>debugging</category>
      <category>video</category>
    </item>
    <item>
      <title>Rails 5.2 Active Storage: Previews, Poppler, and Solving Licensing Pitfalls</title>
      <dc:creator>Schneems</dc:creator>
      <pubDate>Thu, 10 May 2018 15:58:51 +0000</pubDate>
      <link>https://dev.to/heroku/rails-52-active-storage-previews-poppler-and-solving-licensing-pitfalls-2b80</link>
      <guid>https://dev.to/heroku/rails-52-active-storage-previews-poppler-and-solving-licensing-pitfalls-2b80</guid>
      <description>&lt;p&gt;Rails 5.2 was just released last month with a major new feature: Active Storage. Active Storage provides file uploads and attachments for Active Record models with a variety of backing services (like AWS S3). While libraries like &lt;a href="https://github.com/thoughtbot/paperclip" rel="noopener noreferrer"&gt;Paperclip&lt;/a&gt; exist to do similar work, this is the first time that such a feature has been shipped with Rails. At Heroku, we consider cloud storage a best practice, so we've ensured that it works on our platform. In this post, we'll share how we prepared for the release of Rails 5.2, and how you can deploy an app today using the new Active Storage functionality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust but Verify
&lt;/h2&gt;

&lt;p&gt;At Heroku, trust is our number one value. When we learned that Active Storage was shipping with Rails 5.2, we began experimenting with all its features. One of the nicest conveniences of Active Storage is its ability to preview PDFs and videos. Instead of linking to assets via text, a small screenshot of the PDF or video will be extracted from the file and rendered on the page.&lt;/p&gt;

&lt;p&gt;The beta version of Rails 5.2 used the popular open source tools FFmpeg and MuPDF to generate video and PDF previews. We vetted these new binary dependencies through both our security and legal departments, where we found that MuPDF is licensed under the AGPL and requires a commercial license for some uses. Had we simply added MuPDF to Rails 5.2+ applications by default, many of our customers would have been unaware that they needed to purchase a MuPDF license to use it commercially.&lt;/p&gt;

&lt;p&gt;The limiting AGPL license was brought to &lt;a href="https://github.com/rails/rails/pull/30667#issuecomment-332276198" rel="noopener noreferrer"&gt;public attention&lt;/a&gt; in September 2017. To prepare for the 5.2 release, our engineer &lt;a href="https://twitter.com/hone02" rel="noopener noreferrer"&gt;Terence Lee&lt;/a&gt; worked to update Active Storage so that this PDF preview feature could also use an open-source backend without a commercial license. We opened a PR to Rails &lt;a href="https://github.com/rails/rails/pull/31906" rel="noopener noreferrer"&gt;introducing the ability to use poppler PDF as an alternative to MuPDF&lt;/a&gt; in February of 2018. The PR was merged roughly a month later, and now any Rails 5.2 user - on or off Heroku - can render PDF previews without having to purchase a commercial license.&lt;/p&gt;

&lt;h2&gt;
  
  
  Active Storage on Heroku Example App
&lt;/h2&gt;

&lt;p&gt;If you've already got an app that implements Active Storage you can &lt;a href="https://devcenter.heroku.com/articles/active-storage-on-heroku?preview=1" rel="noopener noreferrer"&gt;jump over to our DevCenter documentation on Active Storage&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Alternatively, you can use our example app. Here is a Rails 5.2 app that is a digital bulletin board allowing people to post videos, pdfs, and images. You can &lt;a href="https://github.com/heroku/active_storage_with_previews_example" rel="noopener noreferrer"&gt;view the source on GitHub&lt;/a&gt; or deploy the app with the Heroku button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://heroku.com/deploy?template=https://github.com/heroku/active_storage_with_previews_example" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.herokucdn.com%2Fdeploy%2Fbutton.svg" alt="Deploy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: This example app requires a paid S3 add-on.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's a video example of what the app does.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.dropbox.com%2Fs%2Fnxnsidob5j8bwev%2Factive-storage.gif%3Fraw%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.dropbox.com%2Fs%2Fnxnsidob5j8bwev%2Factive-storage.gif%3Fraw%3D1" alt="Active Storage on Heroku"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the home page, select an appropriate asset, and then submit the form. In the video, the &lt;code&gt;mp4&lt;/code&gt; file is uploaded to S3 and then a preview is generated on the fly by Rails with the help of &lt;code&gt;ffmpeg&lt;/code&gt;. Pretty neat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Active Storage on Heroku
&lt;/h2&gt;

&lt;p&gt;If you deployed the example app using the button, it's already configured to work on Heroku via the &lt;code&gt;app.json&lt;/code&gt;. However, if you've got your own app that you would like to deploy, how do you set it up so it works on Heroku?&lt;/p&gt;

&lt;p&gt;Following the &lt;a href="https://devcenter.heroku.com/articles/active-storage-on-heroku?preview=1" rel="noopener noreferrer"&gt;DevCenter documentation for Active Storage&lt;/a&gt;, you will need a file storage service that all your dynos can talk to. The example uses a Heroku add-on for S3 called &lt;a href="https://elements.heroku.com/addons/bucketeer" rel="noopener noreferrer"&gt;Bucketeer&lt;/a&gt;, though you can also use existing S3 credentials.&lt;/p&gt;

&lt;p&gt;To get started, add the AWS gem for S3 to the Gemfile, and if you’re modifying images as well add Mini Magick:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;gem&lt;/span&gt; &lt;span class="s2"&gt;"aws-sdk-s3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;require: &lt;/span&gt;&lt;span class="kp"&gt;false&lt;/span&gt;
&lt;span class="n"&gt;gem&lt;/span&gt; &lt;span class="s1"&gt;'mini_magick'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'~&amp;gt; 4.8'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't forget to &lt;code&gt;$ bundle install&lt;/code&gt; after updating your Gemfile.&lt;/p&gt;

&lt;p&gt;Next up, add an &lt;code&gt;amazon&lt;/code&gt; entry to your &lt;code&gt;config/storage.yml&lt;/code&gt; file pointing at the S3 config; in this example we use the config vars set by Bucketeer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;amazon&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;S3&lt;/span&gt;
  &lt;span class="na"&gt;access_key_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;%= ENV['BUCKETEER_AWS_ACCESS_KEY_ID'] %&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;secret_access_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;%= ENV['BUCKETEER_AWS_SECRET_ACCESS_KEY'] %&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;%= ENV['BUCKETEER_AWS_REGION'] %&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;%= ENV['BUCKETEER_BUCKET_NAME'] %&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then make sure that your app is set to use the &lt;code&gt;:amazon&lt;/code&gt; config store in production:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;active_storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="ss"&gt;:amazon&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you forget this step, the default is the &lt;code&gt;:local&lt;/code&gt; store, which saves files to disk. This is not a scalable way to handle uploaded files in production. If you accidentally deploy this to Heroku, it will appear at first that the files were uploaded, but then they will disappear on random requests if you're running more than one dyno, and they will go away altogether when the dynos are restarted. You can get more information about the &lt;a href="https://devcenter.heroku.com/articles/active-storage-on-heroku?preview=1#ephemeral-disk" rel="noopener noreferrer"&gt;ephemeral disk of Heroku in the DevCenter&lt;/a&gt;.&lt;/p&gt;
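&lt;p&gt;A minimal sketch of how the per-environment wiring usually looks (the symbols must match the keys in &lt;code&gt;config/storage.yml&lt;/code&gt;; keeping &lt;code&gt;:local&lt;/code&gt; for development is a common choice, not a requirement):&lt;/p&gt;

```ruby
# config/environments/development.rb -- local disk is fine while developing
config.active_storage.service = :local

# config/environments/production.rb -- cloud storage, so files survive
# dyno restarts and are shared across dynos
config.active_storage.service = :amazon
```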

&lt;p&gt;Finally, the last thing you'll need to get this working in production is a custom buildpack that installs the binary dependencies &lt;code&gt;ffmpeg&lt;/code&gt; and &lt;code&gt;poppler&lt;/code&gt;, which are used to generate the asset previews:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;heroku buildpacks:add &lt;span class="nt"&gt;-i&lt;/span&gt; 1 https://github.com/heroku/heroku-buildpack-activestorage-preview
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you’re done, you can deploy to Heroku!&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Active Storage to an Existing App
&lt;/h2&gt;

&lt;p&gt;If your app doesn't already have Active Storage, you can add it. First, you'll need to enable Active Storage blob storage by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;bin/rails active_storage:install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will add a migration that lets Rails track the uploaded files.&lt;/p&gt;
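&lt;p&gt;For reference, here is a sketch (not the verbatim Rails migration, and with some options abbreviated) of the two tables the installer creates in Rails 5.2: &lt;code&gt;active_storage_blobs&lt;/code&gt; holds file metadata, and &lt;code&gt;active_storage_attachments&lt;/code&gt; is a polymorphic join table linking blobs to your models.&lt;/p&gt;

```ruby
# Sketch of the schema created by `bin/rails active_storage:install`
class CreateActiveStorageTables < ActiveRecord::Migration[5.2]
  def change
    create_table :active_storage_blobs do |t|
      t.string   :key,        null: false   # storage-service key (e.g. S3 object key)
      t.string   :filename,   null: false
      t.string   :content_type
      t.text     :metadata
      t.bigint   :byte_size,  null: false
      t.string   :checksum,   null: false
      t.datetime :created_at, null: false
      t.index [:key], unique: true
    end

    create_table :active_storage_attachments do |t|
      t.string     :name,   null: false    # the attribute name, e.g. :attachment
      t.references :record, null: false, polymorphic: true, index: false
      t.references :blob,   null: false
      t.datetime   :created_at, null: false
    end
  end
end
```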

&lt;p&gt;Next, you'll need a model to "attach" files onto. You can use an existing model, or create a new model. In the example app a mostly empty &lt;code&gt;bulletin&lt;/code&gt; model is used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;bin/rails generate scaffold bulletin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, run the migrations on the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;bin/rails db:migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the database is migrated, update the model to let Rails know that you intend to be able to attach files to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Bulletin&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ApplicationRecord&lt;/span&gt;
  &lt;span class="n"&gt;has_one_attached&lt;/span&gt; &lt;span class="ss"&gt;:attachment&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that's done, we will need three more pieces: a form for uploading attachments, a controller to save attachments, and then a view for rendering the attachments.&lt;/p&gt;

&lt;p&gt;If you have an existing form you can add an attachment field via the &lt;code&gt;file_field&lt;/code&gt; view helper like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight erb"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;%=&lt;/span&gt; &lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;file_field&lt;/span&gt; &lt;span class="ss"&gt;:attachment&lt;/span&gt; &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see an example of a form with an attachment in &lt;a href="https://github.com/heroku/active_storage_with_previews_example/blob/ab0370f77f35f8eb0813727b8d49758926450f5e/app/views/welcome/_upload.html.erb#L14" rel="noopener noreferrer"&gt;the example app&lt;/a&gt;. Once you have a form, you will need to save the attachment.&lt;/p&gt;

&lt;p&gt;In this example app, the home page contains the form and the view. In the &lt;a href="https://github.com/heroku/active_storage_with_previews_example/blob/ab0370f77f35f8eb0813727b8d49758926450f5e/app/controllers/bulletins_controller.rb#L26-L32" rel="noopener noreferrer"&gt;bulletin controller&lt;/a&gt; the attachment is saved and then the user is redirected back to the main bulletin list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;create&lt;/span&gt;
  &lt;span class="vi"&gt;@bulletin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="vi"&gt;@bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:bulletin&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="ss"&gt;:attachment&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="vi"&gt;@bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save!&lt;/span&gt;

  &lt;span class="n"&gt;redirect_back&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;fallback_location: &lt;/span&gt;&lt;span class="n"&gt;root_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, in the &lt;a href="https://github.com/heroku/active_storage_with_previews_example/blob/ab0370f77f35f8eb0813727b8d49758926450f5e/app/views/welcome/index.erb" rel="noopener noreferrer"&gt;welcome view&lt;/a&gt; we iterate through each of the bulletin items and, depending on the type of attachment we have, render it differently.&lt;/p&gt;

&lt;p&gt;In Active Storage the &lt;code&gt;previewable?&lt;/code&gt; method will return true for PDFs and videos provided the system has the right binaries installed. The &lt;code&gt;variable?&lt;/code&gt; method will return true for images if &lt;code&gt;mini_magick&lt;/code&gt; is installed. If neither is true, the attachment is likely a file best viewed after being downloaded. Here's &lt;a href="https://github.com/heroku/active_storage_with_previews_example/blob/ab0370f77f35f8eb0813727b8d49758926450f5e/app/views/welcome/index.erb#L24-L37" rel="noopener noreferrer"&gt;how we can represent that logic&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight erb"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;ul&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"no-bullet"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="cp"&gt;&amp;lt;%&lt;/span&gt; &lt;span class="vi"&gt;@bulletin_list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;bulletin&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;li&amp;gt;&lt;/span&gt;
      &lt;span class="cp"&gt;&amp;lt;%&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;previewable?&lt;/span&gt; &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
        &lt;span class="cp"&gt;&amp;lt;%=&lt;/span&gt; &lt;span class="n"&gt;link_to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;preview&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;resize: &lt;/span&gt;&lt;span class="s2"&gt;"200x200&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="n"&gt;rails_blob_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;disposition: &lt;/span&gt;&lt;span class="s2"&gt;"attachment"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
      &lt;span class="cp"&gt;&amp;lt;%&lt;/span&gt; &lt;span class="k"&gt;elsif&lt;/span&gt; &lt;span class="n"&gt;bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;variable?&lt;/span&gt; &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
        &lt;span class="cp"&gt;&amp;lt;%=&lt;/span&gt; &lt;span class="n"&gt;link_to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;variant&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;resize: &lt;/span&gt;&lt;span class="s2"&gt;"200x200"&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="n"&gt;rails_blob_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;disposition: &lt;/span&gt;&lt;span class="s2"&gt;"attachment"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
      &lt;span class="cp"&gt;&amp;lt;%&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
        &lt;span class="cp"&gt;&amp;lt;%=&lt;/span&gt; &lt;span class="n"&gt;link_to&lt;/span&gt; &lt;span class="s2"&gt;"Download file"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rails_blob_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bulletin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;disposition: &lt;/span&gt;&lt;span class="s2"&gt;"attachment"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
      &lt;span class="cp"&gt;&amp;lt;%&lt;/span&gt; &lt;span class="k"&gt;end&lt;/span&gt; &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/li&amp;gt;&lt;/span&gt;
  &lt;span class="cp"&gt;&amp;lt;%&lt;/span&gt; &lt;span class="k"&gt;end&lt;/span&gt; &lt;span class="cp"&gt;%&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/ul&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
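&lt;p&gt;The same three-way branch can be sketched as plain Ruby, with a hypothetical stub standing in for the attachment (the real &lt;code&gt;previewable?&lt;/code&gt; and &lt;code&gt;variable?&lt;/code&gt; checks also depend on which binaries and gems are installed):&lt;/p&gt;

```ruby
# Hypothetical stand-in for an Active Storage attachment, keyed only
# on content type for illustration.
StubAttachment = Struct.new(:content_type) do
  def previewable?
    %w[application/pdf video/mp4].include?(content_type)
  end

  def variable?
    content_type.start_with?("image/")
  end
end

# Mirrors the view: previewable media get a generated preview image,
# variable images get a resized variant, everything else a download link.
def render_mode(attachment)
  if attachment.previewable?
    :preview   # image_tag attachment.preview(resize: "200x200>")
  elsif attachment.variable?
    :variant   # image_tag attachment.variant(resize: "200x200")
  else
    :download  # link_to "Download file", ...
  end
end
```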



&lt;p&gt;Once you've got all these pieces in your app, and configured Active Storage to work in production, your users can enjoy uploading and downloading files with ease.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>ruby</category>
      <category>aws</category>
      <category>activestorage</category>
    </item>
    <item>
      <title>The Programmer's Guide to Pairing on Pregnancy</title>
      <dc:creator>Schneems</dc:creator>
      <pubDate>Wed, 07 Jun 2017 16:07:19 +0000</pubDate>
      <link>https://dev.to/schneems/the-programmers-guide-to-pairing-on-pregnancy</link>
      <guid>https://dev.to/schneems/the-programmers-guide-to-pairing-on-pregnancy</guid>
      <description>


&lt;p&gt;You don't have to be physically carrying a child to be involved in a pregnancy. If you pair program, you know that you don't have to have your hands physically on the keyboard to contribute to the experience. I'm currently on track for my second little one and wanted to give a shout out to some things I've seen that partners of all genders have done to help with pregnancies. While I cannot physically carry my child to term, that doesn't mean pregnancy is a passive event for me. Let's get started.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: I'm using the word partner to refer to anyone related to the person carrying the child.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Go to Appointments
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Easy&lt;/li&gt;
&lt;li&gt;Required: Extremely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It may seem like a small thing, but go with your partner to pregnancy checkups. Even if you've got that big meeting the next day, it's something both of you should be there for. Besides being supportive and holding their hand while bloodwork is drawn, there are a few excellent reasons to go together.&lt;/p&gt;

&lt;p&gt;While everyone plans for the best, one out of five pregnancies ends in a miscarriage. If you find out something is wrong during a visit, your partner shouldn't have to be there alone.&lt;/p&gt;

&lt;p&gt;Even when things are going well, there are so many details the doctor may talk through that it can be helpful to have a second set of ears. Heads up: there will be handouts, and you may consider taking notes (if that's your thing). There were more than a few times my wife and I had slightly different takeaways, and after comparing notes, we were forced to clarify to make sure we had the right information.&lt;/p&gt;

&lt;p&gt;Another way to think of this: if your partner cannot skip something, neither should you. You'll never be able to contribute equally to the carrying and delivering of the child, so the least you can do is be there through every step of the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Birth Plan
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Easy&lt;/li&gt;
&lt;li&gt;Required: Major brownie points if you bring this up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Want to delight your partner when they're not pregnant? Surprise them with flowers for no reason. During pregnancy, you can be the "best partner ever"™ by proactively considering things you'll need to do and driving them forward instead of waiting for your baby-momma to take charge of everything.&lt;/p&gt;

&lt;p&gt;One of the first things you'll need is a birth plan. At a high level, this consists of where the two of you want to give birth, where you want to have your checkups, and how you want to give birth. There are more details for an actual "birth plan", but at the beginning these are what you'll need. I bring this up early because the answers to the three questions aren't independent. If you find an OB you like and start doing your checkups there, and then find they will not support your method of delivery, you'll have to switch doctors, and that will be a pain.&lt;/p&gt;

&lt;p&gt;Tell your partner you want to talk about a birth plan. You'll need to do some research on available options. At a very high level, there are birthing centers, hospitals, and home births. We gave birth at a birthing center that was attached to a hospital, so there are also options in between. You can take tours of hospitals and birthing centers, and I highly recommend doing this. Look up times, book the tours, and afterward talk about what you like and what you don't.&lt;/p&gt;

&lt;p&gt;I didn't do this with my first kid, and I wish I had. It kinda sucks that not only did I make my wife carry a child and deliver it, but she also had to do all the paperwork of booking tours and bringing up all the various "talks" that we had to have early on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Read a Pregnancy Book
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Medium&lt;/li&gt;
&lt;li&gt;Required: In some form, yes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not assuming all partners are men, but don't get a "dude" book. This goes not just for pregnancy but for life in general: if something has "dude" in the title in a non-ironic way, skip it. At best it's light on information and heavy on assumptions. At worst it might be misogynistically awful. At the end of the day reading anything is better than nothing, but quality varies wildly.&lt;/p&gt;

&lt;p&gt;The first time around I chose a book that was awful. It had a bullet point list of things that will happen at each phase of pregnancy and while it was written for men, it had little advice on how to help my wife. It was mostly "don't watch too much football".&lt;/p&gt;

&lt;p&gt;There are two categories of baby books: "what to expect" and "after the birth event." Many books try to cram in both subjects. I think as someone who has never experienced pregnancy first hand, you want a good book focused on pregnancy. I recommend &lt;a href="https://www.amazon.com/gp/product/0307237087?ie=UTF8&amp;amp;tag=schneems-20&amp;amp;camp=1789&amp;amp;linkCode=xm2&amp;amp;creativeASIN=0307237087"&gt;"From the Hips"&lt;/a&gt;. Books often lean heavily towards one philosophy; this book presented many, along with science to back them up and quotes from real pregnant women, to give you a well-rounded view.&lt;/p&gt;

&lt;p&gt;If you don't like a book, get another one. If you're not retaining information, it's no good.&lt;/p&gt;

&lt;h2&gt;
  
  
  Birth Classes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Easy&lt;/li&gt;
&lt;li&gt;Required: Absolutely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is as much variety in birthing classes as there is in ways to give birth. While your center may offer a one-day "birthing" class, I recommend finding a way to supplement the information. The amount of information and the number of topics are overwhelming. The more spread out the material is, the better you'll retain it. Do some research, find a few classes around town, and ask your partner which ones they would like to check out.&lt;/p&gt;

&lt;p&gt;We used a doula (which is another thing you should maybe check out), and as part of the service with the doula collective, Austin Born, there were classes. Some of them were free as a way to meet the doulas; others were part of our doula experience. We even signed up for an extra one-day intensive "what to know after pregnancy" class. Topics varied: for example, a class on breastfeeding, or one on the different types of medical interventions that might come up while giving birth, what rights we had, and good questions to ask to make decisions.&lt;/p&gt;

&lt;p&gt;Whether you hire a doula or not, I do recommend getting some form of birth coach training. Labor lasts a long time, especially for the first pregnancy. If you can support your partner even a little bit, it can help the overall process tremendously. You'll learn things like different labor positions and phrases you can say to coach your partner to breathe. Things like holding a partner's hands or rubbing their shoulders can have a big positive effect. Be prepared with food and drinks that are easy to eat in small bites. It's also your job to time contractions and record them (there are apps for this).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Protip: Depending on where you're giving birth, some places won't let women eat once they are admitted (ask at your location). Even if they can, they might not be in the mood when they are far along, but it's important to consume calories to keep up energy. You can buy &lt;a href="https://www.amazon.com/gp/product/B01KXCJ7BW?ie=UTF8&amp;amp;tag=schneems-20&amp;amp;camp=1789&amp;amp;linkCode=xm2&amp;amp;creativeASIN=B01KXCJ7BW"&gt;"mini" ice cube trays from Amazon&lt;/a&gt; and fill them with a sugary liquid your partner likes. Keep them in a Ziploc bag in the freezer until it's time to head to your birth place. We made raspberry leaf tea with honey, let it cool, and made ice cubes out of it. In the last few hours my wife described them as "a lifesaver". On this note, don't forget to eat and drink yourself. The longer labor goes, the more your partner needs you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Practice Being your Best Self: Brush and Floss
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Hard&lt;/li&gt;
&lt;li&gt;Required: Recommended, not required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Close your eyes and imagine what the "ideal" version of you would be. What activity do you know you &lt;strong&gt;should&lt;/strong&gt; be doing that you aren't? While pregnancy may be stressful to you and your partner, having to wake up 8 times a night will push you past mental and emotional limits you didn't know you had. Not to scare you: humans have coped with infants for many generations, and most of our ancestors turned around and signed up to do it multiple times. I'm going back for seconds, but don't kid yourself that it will be easier after the baby comes out. You'll be more successful if you build a plan and start sticking to it now. Don't wait until it's too late.&lt;/p&gt;

&lt;p&gt;When you lose sleep, you lose patience, and willpower. Once you find out that your partner is pregnant, you have 9 months to cement all the best habits you wish you had. If it takes willpower to do it before you give birth, it will be virtually impossible after.&lt;/p&gt;

&lt;p&gt;One item for me was brushing my teeth. I always hated it. Because of this I intentionally took this time to develop it into a habit. Now I sometimes find myself laying in bed with no memory of brushing my teeth and only a minty fresh breath to prove that I did.&lt;/p&gt;

&lt;p&gt;Maybe you already brush your teeth, maybe there's something else you wish you could do. Nick Means &lt;a href="https://www.youtube.com/watch?v=xT_YaduPYlk"&gt;gave a great talk&lt;/a&gt; about building habits and not needing to rely on willpower at RailsConf.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wean Coffee and Alcohol
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Intermediate&lt;/li&gt;
&lt;li&gt;Required: Suggested, not required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One thing I wish I had done in the first pregnancy: limit coffee and alcohol. I love coffee, I love the taste, I love the feeling when the caffeine hits your bloodstream early in the morning.&lt;/p&gt;

&lt;p&gt;I also love a good craft beer, or a scotch, recently even Texas bourbon. I still love these things. The problem is that when we have a bad day, or when things go south, we are primed to say "geez I need a drink." After the baby comes, you'll not be able to leave the house as much, and it's tempting to turn parenthood into a drinking game in the evening. Baby is crying: have a drink. Baby spit up on itself: have a drink.&lt;/p&gt;

&lt;p&gt;The problem is that your body is already being pushed to its limits by a lack of sleep. Want to know what's worse than having to get up in the middle of the night to quiet a screaming infant? Trying to do that with a hangover. Even if you didn't drink that much, you might find your predisposition for brain-splitting headaches after a drink has substantially increased. You'll also get even worse sleep than without the booze. But the worst part is that it's a cycle of dependence: the next day you feel even WORSE, so what do you do at the end of another "bad day"? Reach for a pint, of course. Don't go down that path.&lt;/p&gt;

&lt;p&gt;For this pregnancy I'm trying to wean myself off of booze. I'm not going entirely off of it. My wife suggested that I don't drink at home, so that I don't associate everyday tasks, like unwinding and reading a book with also having a beer. I still have a few drinks when I go out. I'm hoping this will make it less tempting to drink when I'm stressed. It's only been a few weeks and I've already lost weight and I feel better, so far so good.&lt;/p&gt;

&lt;p&gt;I weaned off coffee because I have a hard time napping. Through the first pregnancy I maybe took 3-4 naps total. It would have been much easier if I could take the standard advice "sleep when your baby sleeps."&lt;/p&gt;

&lt;p&gt;If you don't have this napping problem, don't worry about it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want a decaf that is 90% as good as regular, I recommend Third Coast Coffee Roasters. I know it still has some caffeine and also that the process of removing it isn't the best in the world. You do what's right for you and your life. I've lost my regular coffee habit, but you can pry my decaf from my hot, freshly brewed hands.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Side note: drink lots of water. I recommend a &lt;a href="https://www.amazon.com/gp/product/B015DJF7KK/ref=as_li_tl?ie=UTF8&amp;amp;camp=1789&amp;amp;creative=9325&amp;amp;creativeASIN=B015DJF7KK&amp;amp;linkCode=as2&amp;amp;tag=schneems-20&amp;amp;linkId=0253c8f4157002cf1fd0dfb1094f88de"&gt;Camelbak Podium&lt;/a&gt; because you can drink it in bed without having to get up or without spilling. After the birth, you'll be waking up a lot more, and every little thing to make your life a little easier is worth it. Also, get your partner one as they'll be waking up to go to the bathroom well before the big event comes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Morning Sickness &amp;amp; Cravings
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Easy&lt;/li&gt;
&lt;li&gt;Required: Absolutely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest misnomer in pregnancy is "morning sickness," because, surprise, it happens all day long. It's usually portrayed as a minor inconvenience; a woman throws up casually as a way to indicate to the audience she is pregnant (squeeeee!!). In reality, she may never throw up, she may have overwhelming nausea for days at a time, or anything in between. While it's different for each woman, my wife once described it as worse than labor when taken as a whole. It's no joke, so you need to be ready to help any way you can.&lt;/p&gt;

&lt;p&gt;One way to help is to observe the things that they are sensitive to. Many women develop a sensitivity to smells. My wife could not stand toothpaste, so we found a salt-based toothpaste that was unflavored. With kid number two she is set off by changing diapers. That means all number-twos from kid number-one are on me to change. It's a small thing to do for the woman carrying your child.&lt;/p&gt;

&lt;p&gt;Pop culture represents pregnancy as somewhat of a running joke on partners. They're seen running to the store at midnight while a woman in full makeup sheepishly asks (or demands) for pickles and ice cream. They lug themselves to the store and back, only to find the items are no longer wanted or their partner is peacefully asleep by the time they get back. They slouch into an oversized chair, look at the pickles with a sense of defeat, and the studio audience laughs.&lt;/p&gt;

&lt;p&gt;The real joke, of course, is expecting Hollywood to be remotely similar to real life. Cravings are different for every woman. Usually, when my wife asked me for some very specific food it was because she was feeling nauseous. We've found that coconut water helps her, so I try to keep it around, and I ask at virtually every restaurant if they carry it regardless of whether it's needed yet. Other times, it was just a matter of being flexible: being willing to get her food while we were both in bed, or making an unexpected stop at a fast food restaurant for something fried.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gear
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Easy&lt;/li&gt;
&lt;li&gt;Required: Yes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll need a minimum amount of baby gear coming home from the hospital: a car seat, a baby hat, and some clothes. Once you get home you'll also want some swaddles, plenty of fresh diapers (maybe consider cloth if you're the environmental type), and a fair share of onesies. You will also need somewhere for the baby to sleep. There are other things like mittens (so they don't scratch themselves, babies are weird) and bigger items like strollers or swings. You can do research on these things without waiting for your partner. Of course you should talk with them before either of you buys big items, but having some knowledge of the available options will make these conversations much smoother.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty: Easy&lt;/li&gt;
&lt;li&gt;Required: Are you freaking kidding me, of course&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most important part of being a good partner is being there for your baby-momma. You might not get everything right, and there's almost always room for improvement, but the least you can do is make a conscious effort to make the process easier on her and on the two of you. While I wrote a lot here and things can get overwhelming, remember that nine months is a long time. It means that you can take things step by step without having to rush it all. It also means that you have no excuse for skipping out on classes, working towards building better habits, or learning a thing or two about pregnancy. Above all else, pay attention to your partner. From my favorite baby delivery joke: it took two of you to get that baby in there, and it will take at least two to get it out.&lt;/p&gt;




&lt;p&gt;If you liked this post join my &lt;a href="https://www.schneems.com/mailinglist"&gt;free mailing list&lt;/a&gt; for more. All links to products on Amazon are affiliate links. &lt;/p&gt;

</description>
      <category>pregnancy</category>
      <category>pairing</category>
      <category>practices</category>
    </item>
    <item>
      <title>Writers Write</title>
      <dc:creator>Schneems</dc:creator>
      <pubDate>Tue, 30 May 2017 15:22:45 +0000</pubDate>
      <link>https://dev.to/schneems/writers-write</link>
      <guid>https://dev.to/schneems/writers-write</guid>
      <description>&lt;p&gt;I've been writing more recently. One of the biggest reasons is that I've been writing more recently. Writing begets writing; the more I do it, the easier it is to do it more. I've found diet to be similar. When I'm eating fresh fruits and veggies, it's what my body craves. But as soon as I "treat" myself with a bag of chips or a fatty big honking slice of greasy pizza, guess what my body wants? More of the same.&lt;/p&gt;

&lt;p&gt;I've written off and on for quite some time. Someone recently pointed out how you could tell how long someone had been in their job by when their last blog post was written. While coders tend to be more comfortable with a text format than many other professions, we're not the best at writing consistently.&lt;/p&gt;

&lt;p&gt;I wrote another post about how &lt;a href="https://dev.to/schneems/coders-code"&gt;coders code&lt;/a&gt; and thought it was fitting to mention how I've made a plan to publish a blog post once a week. I've been keeping up the streak for a while now and feel pretty good about it. One trick I'm using is that I've got a few posts written ahead of schedule, so it doesn't feel like I'm down to the wire on deadlines. I'm on a plane right now without WiFi and feeling particularly inspired, so this is actually the third post I've written today.&lt;/p&gt;

&lt;p&gt;In the past I've traditionally written technically focused articles. I still do that and will keep publishing them. The interesting thing about forcing myself to sit down and write no matter what is that I can't wait months for that huge open source pull request I'm working on to drop before I blog about it. I've got to talk about what's on my mind right now, ready or not. Sometimes it ends up being awful, and when that happens, since I have some buffer room in my schedule, I can throw those posts away. Sometimes it ends up taking me places I wouldn't have gone before. When you go through life with an eye out for a story, you'll be amazed at how many you find.&lt;/p&gt;

&lt;p&gt;I tend to preach when I write. I like to have a message. I like to try to get people motivated or active. I do the same when I give talks. Instead of "look at what I did" I usually also add a "and here's how you can do it too" flair. It's a bit of a crutch at this point, as sometimes things can be interesting without having to be instructive. While I don't necessarily recommend blogging, as it is tedious and time-consuming, I do recommend you write.&lt;/p&gt;

&lt;p&gt;As you write you'll find an audience of one. Read your own writing. Take notes. You'll be surprised at what "past you" can teach "future you". Keep writing and maybe you'll find a larger audience. Write wherever possible: write commit messages, write GitHub comments, write design docs, write READMEs and method docs. Write. The more you do it, the easier it will be.&lt;/p&gt;

&lt;p&gt;Writing is an act of communication. Programmers are really in the communication business. We communicate with machines in an arcane language to tell them how to do our business. We must share our requirements and our efforts with others, co-workers, bosses, co-founders, and customers.&lt;/p&gt;

&lt;p&gt;While we often think of writing as a one-to-many medium (books, magazines, blog posts) our most important writings can be the ones that are day-to-day. While your primary job title might not be "writer", writing is a tier-one method of communication.&lt;/p&gt;

&lt;p&gt;If you write, I can't guarantee fame or fortune. I can't promise you an audience. However, I will give you my word that writing will bring more writing. Communication will bring more communication. Clarity will bring more clarity.&lt;/p&gt;

&lt;p&gt;Write.&lt;/p&gt;




&lt;p&gt;If you liked this post join my &lt;a href="https://www.schneems.com/mailinglist"&gt;free mailing list&lt;/a&gt; for more. &lt;/p&gt;

</description>
      <category>practices</category>
      <category>writing</category>
    </item>
    <item>
      <title>Coders Code</title>
      <dc:creator>Schneems</dc:creator>
      <pubDate>Fri, 26 May 2017 15:15:57 +0000</pubDate>
      <link>https://dev.to/schneems/coders-code</link>
      <guid>https://dev.to/schneems/coders-code</guid>
      <description>

&lt;p&gt;As truisms go, one of my favorites is "writers write". Many developers walk around pondering whether they are "real coders", or they ask "how can I be more senior". To them, I say "coders code". If someone is writing, then by definition they are a writer. It doesn't matter if they are J.K. Rowling or working on a blog post. The act of writing creates a writer. The same is true of coding. If you're in QA or DevOps or frontend or backend, or you spend your days hunting down missing semicolons in code reviews, you're a coder. When you put your fingers to the keyboard at your editor of choice, even if it's not Emacs or Vim or *&amp;lt;latest hot editor here&amp;gt;*, you're still a coder. If your fingers never grace a keyboard and you drive a pair session or dictate text, you're a coder.&lt;/p&gt;

&lt;p&gt;What about when you're not in the act of generating code? Do you stop being a coder? I don't know what Stephen King is doing right now. Maybe he's brushing his teeth, fixing some food, or going to the bathroom. Because he doesn't have a pen in hand or a typewriter under a finger, does it mean he's no longer a writer? Of course not. The phrase is "writers write", not "writers are only writers when they are actively writing". The same applies to coders. I've heard again and again from just about every programmer under the sun that the longer they've been coding, the less time they physically spend writing code. Maybe they pick up a mentorship role, or they work on documentation and code reviews. People on a life-long technical track tend to be attracted to hard problems, many of which are best solved by contemplation and careful thought instead of hours of guess-and-check at the keyboard. I've switched my showers from first thing in the morning to after I've worked a bit for the day (I work from home). I find when I'm stuck and my wheels are spinning, taking a step back to really think about the issue at hand while in a shower works miracles.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For a deeper dive on why exactly this trick might work check out the book &lt;a href="https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555"&gt;Thinking Fast and Slow&lt;/a&gt;. Also, did you know you can check out Kindle books from your local library?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;How does a writer get better at writing? They write. Ideally, they have some kind of a feedback loop, maybe a mentor, maybe they go to school and are given assignments, or they join some kind of writing retreat or club. At the end of the day though, the important part isn't the school or the criteria, it's the act of writing. If you're just getting started or want to make it to the next career milestone, might I recommend coding? Coding is the act, and the journey, and the goal. However, you can't just sit down and pound out code.&lt;/p&gt;

&lt;p&gt;A book needs a plot. Code needs a problem to solve. Before we can have a problem, we need to have a desire, a want. A problem is a thing that gets in the way of having our desires met. For this, I recommend finding a thing you want to build. As you work towards that goal, the act of coding will come naturally. Forget "what is the best language to learn" or "how do you get better at programming". Find a topic or subject that interests you and an idea that keeps you awake. When you find this, the code will come.&lt;/p&gt;

&lt;p&gt;If you don't have a passion project burning a hole in your brain, you can cheat to find a muse. Some people are incentivized by a university or a bootcamp. Maybe prowling a tag on Stack Overflow or signing up for a mailing list will give you inspiration. I run a free service called &lt;a href="https://www.codetriage.com"&gt;CodeTriage&lt;/a&gt; that helps coders find a muse for Open Source contributions, which is another treasure trove of code just waiting to be written.&lt;/p&gt;

&lt;p&gt;If you've been coding for a while and have no lack of opportunities, what then? Code for the job you want. Perhaps you want to specialize in writing performant code, or at the very least get better at it. Performant coders write performant code. Find other coders who have written fast code, and find out the why and the how. Find slow code (it shouldn't be too hard) and some benchmarking tools to isolate the slow parts. Then try making it faster. Read performance blog posts and books. Ask questions, go down rabbit holes. When you get stuck, ask yourself what's stopping you from writing code. Remove those blockers, then get back to writing code.&lt;/p&gt;

&lt;p&gt;Now ask yourself, are you a coder? If you answered in anything but the affirmative, what's stopping you from claiming the title? You write code, right? That makes you a coder!&lt;/p&gt;

&lt;p&gt;What comes next? Get excited, get engaged, get coding!&lt;/p&gt;




&lt;p&gt;If you liked this post join my &lt;a href="http://schneems.us3.list-manage.com/subscribe?u=a9095027126a1cf15c5062160&amp;amp;id=17dc267687"&gt;free mailing list&lt;/a&gt; for more. &lt;/p&gt;


</description>
      <category>practices</category>
      <category>seniordeveloper</category>
    </item>
    <item>
      <title>Who Called Git? An Unusual Debugging Story</title>
      <dc:creator>Schneems</dc:creator>
      <pubDate>Mon, 28 Nov 2016 00:00:00 +0000</pubDate>
      <link>https://dev.to/schneems/who-called-git-an-unusual-debugging-story</link>
      <guid>https://dev.to/schneems/who-called-git-an-unusual-debugging-story</guid>
      <description>

</description>
      <category>ruby</category>
      <category>heroku</category>
      <category>cli</category>
    </item>
  </channel>
</rss>
