<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kmaschta</title>
    <description>The latest articles on DEV Community by Kmaschta (@kmaschta).</description>
    <link>https://dev.to/kmaschta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F5316%2F1819833.png</url>
      <title>DEV Community: Kmaschta</title>
      <link>https://dev.to/kmaschta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kmaschta"/>
    <language>en</language>
    <item>
      <title>As a Team, How Do You Share Your Knowledge?</title>
      <dc:creator>Kmaschta</dc:creator>
      <pubDate>Tue, 26 Nov 2019 11:37:49 +0000</pubDate>
      <link>https://dev.to/kmaschta/as-a-team-how-do-you-share-your-knowledge-4a63</link>
      <guid>https://dev.to/kmaschta/as-a-team-how-do-you-share-your-knowledge-4a63</guid>
      <description>&lt;p&gt;Each engineer's knowledge is endlessly valuable, and I'm searching for a way to make all those hours spent on problems pay off by disseminating what we learn among a growing team of ~15 software engineers.&lt;/p&gt;

&lt;p&gt;We already have countless ways to do so:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;internal brown-bag lunches (BBL)&lt;/li&gt;
&lt;li&gt;a lot of &lt;a href="https://marmelab.com/blog/"&gt;blog posts&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;regular hackdays with a demo to the team at the end&lt;/li&gt;
&lt;li&gt;local meetups, etc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But we need to formalize this knowledge into a base that is easily searchable, discoverable, and easy to feed.&lt;/p&gt;

&lt;p&gt;We've heard of Stack Overflow for Teams, and we're also considering opening a Notion workspace.&lt;/p&gt;

&lt;p&gt;What do you recommend?&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Releasing Comfygure 1.0</title>
      <dc:creator>Kmaschta</dc:creator>
      <pubDate>Tue, 28 May 2019 16:20:36 +0000</pubDate>
      <link>https://dev.to/kmaschta/releasing-comfygure-1-0-46j3</link>
      <guid>https://dev.to/kmaschta/releasing-comfygure-1-0-46j3</guid>
      <description>&lt;p&gt;&lt;a href="https://marmelab.com/blog/2017/11/07/introducing-comfygure.html"&gt;Two years ago&lt;/a&gt;, we introduced an open-source configuration manager called &lt;strong&gt;comfygure&lt;/strong&gt;. Since then, we've been using this tool in production, and we've tweaked it so it's ready for prime time. Today, we are proud to announce that we released a stable version!&lt;/p&gt;

&lt;p&gt;Read on to see what is included in this version, how you can test it, and what our plans are for the future of this project.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This blog post was published and discussed on Hacker News: &lt;a href="https://news.ycombinator.com/item?id=20029420"&gt;https://news.ycombinator.com/item?id=20029420&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What Is Comfygure?
&lt;/h2&gt;

&lt;p&gt;Let's take two minutes to explain what comfygure is and how it works. Not so long ago, I ran a small survey on how software engineers manage the configuration of their environments.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kvinmaschtaler.typeform.com/to/KMXdh2"&gt;survey is still online&lt;/a&gt;, and you can see the results by following this link: &lt;a href="https://kvinmaschtaler.typeform.com/report/KMXdh2/A4g9saoaIc2vFfbl"&gt;Configuration Management Survey Results&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me summarize what the participants answered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Software engineers and ops are fully responsible for triggering production deployments themselves&lt;/li&gt;
&lt;li&gt;Most of the time, 2 to 5 team members can do so&lt;/li&gt;
&lt;li&gt;Deployments are automated, at least partially&lt;/li&gt;
&lt;li&gt;Each team tends to have 2 to 3 environments per application&lt;/li&gt;
&lt;li&gt;Environment variables, JSON files, and YAML files are the favorite configuration formats&lt;/li&gt;
&lt;li&gt;There is no consensus on how to store these configurations&lt;/li&gt;
&lt;li&gt;Most respondents don't use configuration or secret managers, but would like to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also gave them a chance to say what they find frustrating about configuration management:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7q58tTj9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/C3ko1pT.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7q58tTj9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/C3ko1pT.jpg" alt="List of survey feedbacks about frustrating configuration management experiences"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There were not enough participants to extrapolate. However, I understand every one of the participants, because we were in a similar situation: too early for automated, full-featured configuration management, but too far along to keep managing configurations by hand.&lt;/p&gt;

&lt;p&gt;This is the reason why &lt;a href="https://marmelab.com/blog/2017/11/07/introducing-comfygure.html"&gt;we decided to build a new solution&lt;/a&gt; that fulfilled our needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A small tool that could store, version, retrieve, and format our application configurations in order to keep coworkers and environments in sync.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LPmyLa1A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/jp5soSLg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LPmyLa1A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/jp5soSLg.png" alt="Comfygure example demo, with multiple commands"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's New In The Stable Release
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Quick Configuration Accessors
&lt;/h3&gt;

&lt;p&gt;This release introduces a new command (&lt;code&gt;comfy set&lt;/code&gt;) and improves the &lt;code&gt;comfy get&lt;/code&gt; command in order to let you read or update a &lt;em&gt;subset&lt;/em&gt; of a configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;comfy &lt;span class="nb"&gt;set &lt;/span&gt;production version &lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;comfy get production version
1.0.0

&lt;span class="nv"&gt;$ &lt;/span&gt;comfy &lt;span class="nb"&gt;set &lt;/span&gt;production flags.stable &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;comfy get production flags &lt;span class="nt"&gt;--json&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"stable"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You no longer need to fetch the configuration file, open it in your IDE, update that one flag, and push the whole file back. All of that is handled by &lt;code&gt;comfy set&lt;/code&gt;. It's particularly useful for quickly changing feature flags or bumping the project version.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keep Track Of Your Configuration History
&lt;/h3&gt;

&lt;p&gt;The previous version already had a &lt;code&gt;comfy log&lt;/code&gt; command, showing the latest changes in the configuration. The stable release fixes a few things about it. Also, it is now possible to retrieve a specific version of your configuration.&lt;/p&gt;

&lt;p&gt;Combined with tags, these commands let you deploy a fixed version to your environment and roll back at the speed of light!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;comfy log production
2019-5-24 10:50:30  production  da70d7d69bb7f158748cf3ea76c08b9c4a12c3c0    latest
2019-5-24 10:50:19  production  ae30ab2567e7316cf5521d26b8c7a03ece0522d3    no tag
2019-5-24 10:50:10  production  964e51df37c0fe2a518998fb6457b461c4013d28    no tag

&lt;span class="nv"&gt;$ &lt;/span&gt;comfy get production version &lt;span class="nt"&gt;--hash&lt;/span&gt; 964e51df37c0fe2a518998fb6457b461c4013d28
0.1.4
&lt;span class="nv"&gt;$ &lt;/span&gt;comfy get productiion version &lt;span class="nt"&gt;--hash&lt;/span&gt; ae30ab2567e7316cf5521d26b8c7a03ece0522d3
1.0.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You may also note that the tag &lt;code&gt;latest&lt;/code&gt; replaces the &lt;code&gt;stable&lt;/code&gt; and &lt;code&gt;next&lt;/code&gt; tags when you create a new environment. You can still create them manually by running &lt;code&gt;comfy tag add production stable &amp;lt;hash&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Host Your Own Configuration Store And Server
&lt;/h3&gt;

&lt;p&gt;Marmelab hosts the default comfygure server at &lt;code&gt;https://comfy.marmelab.com&lt;/code&gt;, for free. But some people don't want a third-party company to store their configuration, even though it's a zero-knowledge platform (the configuration is encrypted client-side), and that's perfectly fine.&lt;/p&gt;

&lt;p&gt;This is why, for this release, we focused on making it easy to host your own comfygure service.&lt;/p&gt;

&lt;p&gt;To this end, we published &lt;a href="https://marmelab.com/comfygure/HostYourOwn.html"&gt;extensive documentation&lt;/a&gt; explaining how you can deploy comfygure on &lt;a href="https://zeit.co/now"&gt;ZEIT's now&lt;/a&gt;, for example, as well as a Docker image available on Docker Hub: &lt;a href="https://hub.docker.com/r/marmelab/comfygure"&gt;marmelab/comfygure&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We use &lt;a href="https://serverless.com/"&gt;serverless&lt;/a&gt; to deploy our own comfygure origin server. That means you can also deploy your own comfy server on &lt;a href="https://serverless.com/framework/docs/providers/"&gt;Amazon Web Services, Microsoft Azure, Google Cloud Platform, and many others&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Feel free to deploy your own comfygure server and play with it. The more comfy servers there are out there, the less centralized the service is. Our goal isn't to become the default host, just to make developers' lives a bit easier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9d7JZSAE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/ZsnLnuL.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9d7JZSAE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/ZsnLnuL.jpg" alt="marmelab/comfygure image on Docker Hub"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;The goal of this release was to implement the major breaking changes and to publish a stable version. The goal of the 1.1 release will be to make the tool easier to use.&lt;/p&gt;

&lt;p&gt;Installing comfy locally is easy, since all the configuration is stored in a single file. On a server or in CI, it can be a bit cumbersome, since it requires no less than six environment variables. How ironic for a tool that wants to make configuration easier to deal with! So the first milestone of the next release is to make server installation fast and easy, and to update the related documentation with some real examples.&lt;/p&gt;

&lt;p&gt;Similarly, there is a small permission management system on comfy with read-only and admin tokens. This is not explicit enough and these tokens need to be editable. This will also be an objective of the next release.&lt;/p&gt;

&lt;p&gt;Also, I find it pretty hard to navigate the configuration history. I'd love to have a &lt;code&gt;comfy diff &amp;lt;env&amp;gt; &amp;lt;tag|hash&amp;gt; &amp;lt;tag|hash&amp;gt;&lt;/code&gt; command that would show what changed between two versions! So it's in the pipeline.&lt;/p&gt;

&lt;p&gt;Finally, and most importantly, I would love to hear what you think of this small tool! Please give it a try and share your feedback.&lt;/p&gt;

&lt;p&gt;Here are some links where you can find more details about it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/marmelab/comfygure"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/comfygure"&gt;comfygure package on NPM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marmelab.com/comfygure/"&gt;Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hub.docker.com/r/marmelab/comfygure"&gt;Docker Image&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marmelab.com/blog/2017/11/07/introducing-comfygure.html"&gt;The previous blog post about comfy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/Kmaschta"&gt;My Twitter account @Kmaschta&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>webdev</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
    <item>
      <title>HTTPS In Development: A Practical Guide</title>
      <dc:creator>Kmaschta</dc:creator>
      <pubDate>Wed, 23 Jan 2019 21:50:50 +0000</pubDate>
      <link>https://dev.to/kmaschta/https-in-development-a-practical-guide-175m</link>
      <guid>https://dev.to/kmaschta/https-in-development-a-practical-guide-175m</guid>
      <description>&lt;p&gt;According to Firefox Telemetry, &lt;a href="https://letsencrypt.org/stats/#percent-pageloads" rel="noopener noreferrer"&gt;76% of web pages are loaded with HTTPS&lt;/a&gt;, and this number is growing.&lt;/p&gt;

&lt;p&gt;Sooner or later, software engineers have to deal with HTTPS, and the sooner the better. Keep reading to learn why, and how to serve a JavaScript application over HTTPS in your development environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://letsencrypt.org/stats/#percent-pageloads" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ft9rf61wk3jchd3h02ylu.png" alt="HTTPS adoption according to Firefox Telemetry"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use HTTPS On a Development Environment?
&lt;/h2&gt;

&lt;p&gt;First, should you serve a website in production through HTTPS at all? Unless you really know what you are doing, &lt;a href="https://doesmysiteneedhttps.com" rel="noopener noreferrer"&gt;the default answer is &lt;strong&gt;yes&lt;/strong&gt;&lt;/a&gt;. It improves your website on so many levels: security, performance, SEO, and so on.&lt;/p&gt;

&lt;p&gt;How to set up HTTPS is often addressed only at the time of the first release, and it brings a lot of other questions. Should traffic be encrypted from end to end, or is terminating TLS at the reverse proxy enough? How should the certificate be generated? Where should it be stored? What about &lt;a href="https://developer.mozilla.org/fr/docs/S%C3%A9curit%C3%A9/HTTP_Strict_Transport_Security" rel="noopener noreferrer"&gt;HSTS&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;The development team should be able to answer these questions early. If you fail to do so, you might &lt;a href="https://nickcraver.com/blog/2017/05/22/https-on-stack-overflow/" rel="noopener noreferrer"&gt;end up like Stack Overflow wasting a lot of time&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Besides, having a development environment as close as possible to production reduces the risk that bugs reach the production environment, and tends to decrease the time needed to debug them. The same goes for end-to-end tests.&lt;/p&gt;

&lt;p&gt;In addition, there are features that only work on a page served by HTTPS, such as &lt;a href="https://developer.mozilla.org/fr/docs/Web/API/Service_Worker_API" rel="noopener noreferrer"&gt;Service Workers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But HTTPS is slow!&lt;/strong&gt; Many people believe that encryption is complicated and in a certain way must be slow to be efficient. But with modern hardware and protocols, &lt;a href="https://istlsfastyet.com/" rel="noopener noreferrer"&gt;this is not true anymore&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How To Generate A Valid Certificate For A Development Environment?
&lt;/h2&gt;

&lt;p&gt;For production systems, it's easy to get a TLS certificate: generate one from &lt;a href="https://letsencrypt.org/" rel="noopener noreferrer"&gt;Let's Encrypt&lt;/a&gt; or buy one from a paid provider.&lt;/p&gt;

&lt;p&gt;For the development environment, it seems trickier, but it isn't that hard.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mkcert: The No Brainer CLI
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://blog.filippo.io/hi/" rel="noopener noreferrer"&gt;Filippo Valsorda&lt;/a&gt; recently published &lt;a href="https://github.com/FiloSottile/mkcert" rel="noopener noreferrer"&gt;&lt;code&gt;mkcert&lt;/code&gt;&lt;/a&gt;, a simple cli to generate locally-trusted development certificates. You just have to run a one-line command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mkcert &lt;span class="nt"&gt;-install&lt;/span&gt;
mkcert example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The locally-trusted certificate and its key will be available in the directory where you ran the command, at &lt;code&gt;./example.com.pem&lt;/code&gt; and &lt;code&gt;./example.com-key.pem&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manual Installation With OpenSSL
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;mkcert&lt;/code&gt; should fulfill all of your needs, unless you have to share the same certificate with your coworkers, or use it on systems other than your local machine. In that case, you can generate your own certificate with &lt;code&gt;openssl&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="nt"&gt;-x509&lt;/span&gt; &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="nt"&gt;-days&lt;/span&gt; 365 &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:2048 &lt;span class="nt"&gt;-keyout&lt;/span&gt; server.key &lt;span class="nt"&gt;-out&lt;/span&gt; server.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
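&lt;p&gt;As an aside, the same &lt;code&gt;openssl&lt;/code&gt; command can run non-interactively by passing &lt;code&gt;-subj&lt;/code&gt;, which is handy in scripts and CI. A quick sketch (adjust the subject to your needs):&lt;/p&gt;

```shell
# Generate the same self-signed certificate without interactive prompts;
# -subj pre-fills the certificate subject so openssl asks no questions
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=localhost" \
    -keyout server.key -out server.crt

# Double-check the subject and validity dates of what was generated
openssl x509 -in server.crt -noout -subject -dates
```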



&lt;p&gt;The certificate (&lt;code&gt;server.crt&lt;/code&gt;) and its key (&lt;code&gt;server.key&lt;/code&gt;) will be valid but &lt;em&gt;self-signed&lt;/em&gt;. This certificate will be unknown to any &lt;a href="https://en.wikipedia.org/wiki/Certificate_authority" rel="noopener noreferrer"&gt;Certificate Authority&lt;/a&gt;. But browsers only accept encrypted connections when they can validate the certificate against a well-known certificate authority, which is impossible for a self-signed certificate, so they display an annoying warning:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fw8rfscx7fp3ipg78r0xg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fw8rfscx7fp3ipg78r0xg.png" alt="Self-Signed Certificate Error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can accept that inconvenience and manually ignore the warning each time it shows up. But it's very cumbersome, and it may block e2e tests in a CI environment. A better solution is to create your own local &lt;strong&gt;certificate authority&lt;/strong&gt;, add this custom authority to your browser and generate a certificate from it.&lt;/p&gt;

&lt;p&gt;That's what &lt;code&gt;mkcert&lt;/code&gt; does for you under the hood, but if you want to do it yourself, I wrote a gist that may help you: &lt;a href="https://gist.github.com/Kmaschta/205a67e42421e779edd3530a0efe5945" rel="noopener noreferrer"&gt;Kmaschta/205a67e42421e779edd3530a0efe5945&lt;/a&gt;.&lt;/p&gt;
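&lt;p&gt;For the curious, here is a condensed sketch of what such a local certificate authority setup looks like with plain &lt;code&gt;openssl&lt;/code&gt; (the gist may use slightly different flags, and the final manual step, importing the root certificate into your browser's trust store, is not shown):&lt;/p&gt;

```shell
# 1. Create the local certificate authority (key + self-signed root cert)
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 \
    -subj "/CN=Local Dev CA" -out rootCA.pem

# 2. Create a key and a certificate signing request for the dev domain
openssl req -new -nodes -newkey rsa:2048 \
    -subj "/CN=localhost" -keyout server.key -out server.csr

# 3. Sign the request with the local CA
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -out server.crt -days 365 -sha256

# 4. Check that the certificate chains back to the local CA
openssl verify -CAfile rootCA.pem server.crt
```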

&lt;h2&gt;
  
  
  HTTPS From a Reverse Proxy Or A Third-Party App
&lt;/h2&gt;

&lt;p&gt;Usually, end users don't reach the application server directly. Instead, their requests are handled by a load balancer or a reverse proxy that distributes requests across backends, caches responses, blocks unwanted requests, and so on. It's not uncommon for these proxies to take on the role of decrypting requests and encrypting responses as well.&lt;/p&gt;

&lt;p&gt;On a development environment, we can use a reverse proxy, too!&lt;/p&gt;

&lt;h3&gt;
  
  
  Encryption via Traefik and Docker Compose
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://traefik.io/" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt; is a reverse proxy that comes with a lot of advantages for developers. Among others, it's simple to configure and it comes with a GUI. Also, there is an official docker image &lt;a href="https://hub.docker.com/_/traefik" rel="noopener noreferrer"&gt;available on docker hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, let's use it inside the &lt;code&gt;docker-compose.yml&lt;/code&gt; of a hypothetical application that only serves static files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.4'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;reverse-proxy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik&lt;/span&gt; &lt;span class="c1"&gt;# The official Traefik docker image&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;--docker --api&lt;/span&gt; &lt;span class="c1"&gt;# Enables the web UI and tells Traefik to listen to docker&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3000:443'&lt;/span&gt;  &lt;span class="c1"&gt;# Proxy entrypoint&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8000:8080'&lt;/span&gt; &lt;span class="c1"&gt;# Dashboard&lt;/span&gt;
        &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock&lt;/span&gt; &lt;span class="c1"&gt;# So that Traefik can listen to the Docker events&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./certs/server.crt:/sslcerts/server.crt&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./certs/server.key:/sslcerts/server.key&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./traefik.toml:/traefik.toml&lt;/span&gt; &lt;span class="c1"&gt;# Traefik configuration file (see below)&lt;/span&gt;
        &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=false'&lt;/span&gt;
        &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;static-files&lt;/span&gt;
    &lt;span class="na"&gt;static-files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;halverneus/static-file-server&lt;/span&gt;
        &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./static:/web&lt;/span&gt;
        &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;traefik.frontend.rule=Host:localhost'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;traefik.port=8080'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;traefik.protocol=http'&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8080:8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, our static file server listens on port 8080 and serves files in HTTP. This configuration tells Traefik to handle HTTPS requests to &lt;code&gt;https://localhost&lt;/code&gt; and proxy each of them to &lt;code&gt;http://localhost:8080&lt;/code&gt; in order to serve static files.&lt;/p&gt;

&lt;p&gt;We also have to add a &lt;code&gt;traefik.toml&lt;/code&gt; to configure the Traefik entry points:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="py"&gt;debug&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;false&lt;/span&gt;

&lt;span class="py"&gt;logLevel&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ERROR"&lt;/span&gt;
&lt;span class="py"&gt;defaultEntryPoints&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;["https","http"]&lt;/span&gt;

&lt;span class="nn"&gt;[entryPoints]&lt;/span&gt;
  &lt;span class="nn"&gt;[entryPoints.http]&lt;/span&gt;
  &lt;span class="py"&gt;address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;":80"&lt;/span&gt;
    &lt;span class="nn"&gt;[entryPoints.http.redirect]&lt;/span&gt;
    &lt;span class="py"&gt;entryPoint&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https"&lt;/span&gt;
  &lt;span class="nn"&gt;[entryPoints.https]&lt;/span&gt;
  &lt;span class="py"&gt;address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;":443"&lt;/span&gt;
  &lt;span class="nn"&gt;[entryPoints.https.tls]&lt;/span&gt;
      &lt;span class="nn"&gt;[[entryPoints.https.tls.certificates]&lt;/span&gt;&lt;span class="err"&gt;]&lt;/span&gt;
      &lt;span class="py"&gt;certFile&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"/sslcerts/server.crt"&lt;/span&gt;
      &lt;span class="py"&gt;keyFile&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"/sslcerts/server.key"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we have two entry points: &lt;code&gt;http&lt;/code&gt; and &lt;code&gt;https&lt;/code&gt;, listening respectively to ports 80 and 443. The first one redirects to the HTTPS, and the second is configured to encrypt requests thanks to the specified TLS certificates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fy8dboqxgihi10t5bw8gm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fy8dboqxgihi10t5bw8gm.png" alt="Traefik Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Encryption From Docker Compose via Nginx
&lt;/h3&gt;

&lt;p&gt;Obviously, we can do exactly the same with the popular Nginx reverse proxy. Since Nginx can also serve static files directly itself, the setup is simpler. Again, the first step is the &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:alpine&lt;/span&gt;
        &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./static:/var/www&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./default.conf:/etc/nginx/conf.d/default.conf&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;../../certs/server.crt:/etc/nginx/conf.d/server.crt&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;../../certs/server.key:/etc/nginx/conf.d/server.key&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:443"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the nginx configuration at &lt;code&gt;default.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt; &lt;span class="s"&gt;default_server&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="s"&gt;[::]:80&lt;/span&gt; &lt;span class="s"&gt;default_server&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;301&lt;/span&gt; &lt;span class="s"&gt;https://&lt;/span&gt;&lt;span class="nv"&gt;$server_name$request_uri&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="p"&gt;~&lt;/span&gt;&lt;span class="sr"&gt;.;&lt;/span&gt;

    &lt;span class="s"&gt;ssl_certificate&lt;/span&gt; &lt;span class="n"&gt;/etc/nginx/conf.d/server.crt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;ssl_certificate_key&lt;/span&gt; &lt;span class="n"&gt;/etc/nginx/conf.d/server.key&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/var/www&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;## If the static server was another docker service,&lt;/span&gt;
    &lt;span class="c1"&gt;## It is possible to forward requests to its port:&lt;/span&gt;
    &lt;span class="c1"&gt;# location / {&lt;/span&gt;
    &lt;span class="c1"&gt;#     proxy_set_header Host $host;&lt;/span&gt;
    &lt;span class="c1"&gt;#     proxy_set_header X-Real-IP $remote_addr;&lt;/span&gt;
    &lt;span class="c1"&gt;#     proxy_pass http://web:3000/;&lt;/span&gt;
    &lt;span class="c1"&gt;# }&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Serving HTTPS Directly From The Application
&lt;/h2&gt;

&lt;p&gt;Sometimes security requirements demand end-to-end encryption, or a reverse proxy might simply seem overkill in development. Most of the time, it's possible to serve HTTPS directly from your everyday development environment.&lt;/p&gt;

&lt;p&gt;Let's take the example of a common stack: a React application with a REST API using &lt;a href="http://expressjs.com/" rel="noopener noreferrer"&gt;express&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Create React App or Webpack Dev Server
&lt;/h3&gt;

&lt;p&gt;Your average React app is bootstrapped by &lt;code&gt;create-react-app&lt;/code&gt;. This awesome tool comes with a lot of built-in features and can handle HTTPS out of the box. To do so, you just have to set an &lt;code&gt;HTTPS=true&lt;/code&gt; environment variable when starting the app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;HTTPS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;npm run start
&lt;span class="c"&gt;# or&lt;/span&gt;
&lt;span class="nv"&gt;HTTPS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;yarn start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will serve your app from &lt;code&gt;https://localhost:3000&lt;/code&gt; instead of &lt;code&gt;http://localhost:3000&lt;/code&gt; with an auto-generated certificate. But it's a self-signed certificate, so browsers display a security warning and the developer experience suffers.&lt;/p&gt;

&lt;p&gt;If you want to use your own HTTPS certificate (signed with an authority that your browser trusts), &lt;code&gt;create-react-app&lt;/code&gt; doesn't let you configure it without ejecting the app (&lt;code&gt;npm run eject&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EDIT:&lt;/strong&gt; A dev.to reader, &lt;a href="https://dev.to/zwerge/comment/8bkj"&gt;Zwerge, found a clever workaround&lt;/a&gt; to replace the default HTTPS certificate on the fly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"prestart"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"(cat ../../certs/server.crt ../../certs/server.key &amp;gt; ./node_modules/webpack-dev-server/ssl/server.pem) || :"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"react-scripts start"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fortunately, if you do eject CRA, or if your project is bundled with webpack, &lt;code&gt;webpack-dev-server&lt;/code&gt; is as straightforward as &lt;code&gt;create-react-app&lt;/code&gt; when it comes to serving HTTPS! It's possible to configure a custom HTTPS certificate with two lines in the webpack configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;path&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;production&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="na"&gt;devServer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;__dirname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../certs/server.key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
            &lt;span class="na"&gt;cert&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;__dirname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../certs/server.crt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next time you run &lt;code&gt;webpack-dev-server&lt;/code&gt;, it will serve HTTPS requests at &lt;code&gt;https://localhost:3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fbj5p96icraj81lnl7qre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fbj5p96icraj81lnl7qre.png" alt="Example App - Static Site"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Encrypted HTTP/2 With Express And SPDY
&lt;/h3&gt;

&lt;p&gt;Now that the frontend of the app is served over HTTPS, we have to do the same for the backend.&lt;/p&gt;

&lt;p&gt;For this purpose, let's use &lt;a href="https://www.npmjs.com/package/express" rel="noopener noreferrer"&gt;express&lt;/a&gt; and &lt;a href="https://www.npmjs.com/package/spdy" rel="noopener noreferrer"&gt;spdy&lt;/a&gt;. No wonder both library names evoke speed: they are fast to set up!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;path&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;spdy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;spdy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;CERTS_ROOT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../certs/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;static&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;static&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;cert&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;CERTS_ROOT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;server.crt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;CERTS_ROOT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;server.key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;spdy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;An error occured&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Server listening on https://localhost:3000.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;HTTP/2 isn't required to serve HTTPS: &lt;a href="https://stackoverflow.com/a/11745114/3868326" rel="noopener noreferrer"&gt;it's possible to serve encrypted content over plain HTTP/1.1&lt;/a&gt;, but since we're setting up HTTPS anyway, we might as well upgrade the protocol. If you want to know more about the advantages of HTTP/2, you can read &lt;a href="https://http2.github.io/faq/" rel="noopener noreferrer"&gt;this quick FAQ&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Modern tooling makes it possible to build applications that are safer and faster for end users and, now, easy to bootstrap. I hope I've convinced you to use these libraries and technologies from your project's inception, while they are still cheap to install.&lt;/p&gt;

&lt;p&gt;All the examples from this blog post are gathered in the following repo: &lt;a href="https://github.com/marmelab/https-on-dev" rel="noopener noreferrer"&gt;marmelab/https-on-dev&lt;/a&gt;. Feel free to play with it and add your own HTTPS development experience!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>react</category>
    </item>
    <item>
      <title>How do you manage your web application configurations?</title>
      <dc:creator>Kmaschta</dc:creator>
      <pubDate>Mon, 12 Nov 2018 16:40:43 +0000</pubDate>
      <link>https://dev.to/kmaschta/how-do-you-manage-your-web-application-configurations-7al</link>
      <guid>https://dev.to/kmaschta/how-do-you-manage-your-web-application-configurations-7al</guid>
      <description>&lt;p&gt;In order to better understand current practices, I'm doing a survey about configuration management and deployment of web applications.&lt;/p&gt;

&lt;p&gt;If you are, like me, willing to learn how the community thrives or struggles with its configurations, or just curious about deployment, I invite you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kvinmaschtaler.typeform.com/to/KMXdh2" rel="noopener noreferrer"&gt;Fill the small survey&lt;/a&gt; (less than 5min)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kvinmaschtaler.typeform.com/report/KMXdh2/A4g9saoaIc2vFfbl" rel="noopener noreferrer"&gt;See the results live&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Discuss how and why you use one technique rather than another&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, I'll write a blog post about the conclusions and what I've learned, trying to stay neutral.&lt;/p&gt;

&lt;p&gt;So, environment variables or YAML?&lt;br&gt;
How do you share configurations between coworkers?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>webdev</category>
      <category>devops</category>
      <category>deployment</category>
    </item>
    <item>
      <title>Site Reliability Engineering: Google's Secret Sauce For High Availability And Happy Ops</title>
      <dc:creator>Kmaschta</dc:creator>
      <pubDate>Tue, 29 May 2018 14:29:43 +0000</pubDate>
      <link>https://dev.to/kmaschta/site-reliability-engineering-googles-secret-sauce-for-high-availability-and-happy-ops-34li</link>
      <guid>https://dev.to/kmaschta/site-reliability-engineering-googles-secret-sauce-for-high-availability-and-happy-ops-34li</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Hope is not a strategy&lt;/em&gt;&lt;br&gt;
— Traditional SRE saying&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've read the book &lt;em&gt;Site Reliability Engineering - How Google Runs Production Systems&lt;/em&gt;. I learned a lot, and I took away many good practices to apply to our own services. Here is the gist, and what I've learned from it. I'll focus on what web developers can learn from SRE, without getting into the complexity of Google's infrastructure.&lt;/p&gt;

&lt;p&gt;One quick disclaimer before diving in: at &lt;a href="https://marmelab.com/" rel="noopener noreferrer"&gt;Marmelab&lt;/a&gt;, we don't run our customers' services in production. Our expertise is eliminating uncertainties through agile iterations, so we usually delegate hosting to a partner company. That said, our job can't be disconnected from production. We are responsible for the quality of the delivered software, and that includes making software that reports quality-of-service problems, and an architecture that makes it resilient and performant. So even though it's not our job per se, the operation of web services interests me a lot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://landing.google.com/sre/book.html" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarmelab.com%2Fimages%2Fblog%2Fsre-book.png" alt="Site Reliability Engineering - How Google Runs Production Systems"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Site Reliability Engineering (SRE)?
&lt;/h2&gt;

&lt;p&gt;First and foremost, let's define this emerging profession invented by Ben Treynor Sloss, Senior VP of Engineering at Google. From his own words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Fundamentally, SRE is what happens when you ask software engineers to design an operations function.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A typical SRE team is formed of 6 to 8 engineers, in order to keep a balanced on-call rotation. Half of it is composed of traditional software engineers (SWE in Google's parlance), and the other half of engineers who are almost qualified to be SWE, but who have skills and interests related to operational fields: Unix internals, networking, and so on.&lt;/p&gt;

&lt;p&gt;They occupy a central position in the company's workflow. They work with software engineers, release managers, data center engineers, product owners, accounting, and upper management.&lt;/p&gt;

&lt;p&gt;They are fully &lt;strong&gt;responsible for the availability, latency, performance, efficiency, charge management, monitoring, emergency response, and capacity planning of a given application&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In order to face these huge responsibilities and hard scaling challenges, they carefully manage their time. They do it in an unusual way: &lt;strong&gt;they must devote less than 50% of their time to operational tasks, toil, and emergency response&lt;/strong&gt;. Most of their time should be spent writing software and tools that automate their own job, or making sure that their application heals itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevOps vs. SRE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DevOps is a movement initiated in 2008-2009 by &lt;a href="https://twitter.com/patrickdebois" rel="noopener noreferrer"&gt;Patrick Debois&lt;/a&gt; to promote agility in the deployment process, and to reduce the gap between developers and ops.&lt;/p&gt;

&lt;p&gt;SRE is rather an engineering field, a way to organize engineers in order to manage reliability as a whole. As explained &lt;a href="https://landing.google.com/sre/book/chapters/introduction.html#devops-or-sre-8OS8HmcX" rel="noopener noreferrer"&gt;page 7 of the SRE book&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;One could equivalently view SRE as a specific implementation of DevOps with some idiosyncratic extensions.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/zjF9aoAIrrjCE/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/zjF9aoAIrrjCE/giphy.gif" alt="Balancing Is Not That Easy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Balancing Availability and Velocity
&lt;/h2&gt;

&lt;p&gt;I expected this book to teach me Google's secret path to the "Always Available System™". I was wrong, but I learned a far more valuable lesson. Google engineers do not focus on 100% availability, because it's unrealistic and unsustainable.&lt;/p&gt;

&lt;p&gt;Instead, according to business requirements and expectations, each application or production system gets a Service Level Objective (SLO). For a typical Google app, the SLO is set around 99.99% ("four nines") or 99.999% ("five nines") availability, measured as the rate of successful requests over all requests.&lt;/p&gt;

&lt;p&gt;Here is &lt;a href="https://en.wikipedia.org/wiki/High_availability#Percentage_calculation" rel="noopener noreferrer"&gt;how the number of nines translates to downtime per year&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;99% ("two nines"): 3.65 days&lt;/li&gt;
&lt;li&gt;99.9% ("three nines"): 8.77 hours&lt;/li&gt;
&lt;li&gt;99.99% ("four nines"): 52.60 minutes&lt;/li&gt;
&lt;li&gt;99.999% ("five nines"): 5.26 minutes&lt;/li&gt;
&lt;/ul&gt;
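&lt;p&gt;Those figures are straight arithmetic: the allowed downtime is simply (1 - availability) of a year. A quick sketch to check them, using a 365.25-day year as the list above does:&lt;/p&gt;

```javascript
// Allowed downtime per year for a given availability, e.g. 99.99 ("four nines")
const MINUTES_PER_YEAR = 365.25 * 24 * 60; // 525,960 minutes

const downtimeMinutesPerYear = (availabilityPercent) =>
    ((100 - availabilityPercent) / 100) * MINUTES_PER_YEAR;

console.log(downtimeMinutesPerYear(99.99).toFixed(2)); // "52.60" minutes
console.log((downtimeMinutesPerYear(99.9) / 60).toFixed(2)); // "8.77" hours
```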

&lt;p&gt;At Marmelab, most of our projects have a 99.9% uptime (three nines, not that bad!), even though we focus far more on velocity than availability. One important takeaway from the SRE book is that, if we ever wanted to reach the next service level grade (four nines), we'd have to increase our efforts on availability &lt;strong&gt;ten times&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The difference between 100% and "x nines" availability is important because it provides measurable room for maneuver. For example, if unavailability is measured in terms of unplanned downtime and the SLO is set to 99.9%, it is considered acceptable to have about eight hours of downtime over a year.&lt;/p&gt;

&lt;p&gt;This relatively thin margin of acceptable and unplanned downtime is a way to measure the risk induced by the velocity. If the availability decreases, and gets closer to the SLO, the SRE team should slow down the feature delivery, and focus on stabilizing the app. On the contrary, if the availability is far above the SLO, then the team has some margin, and can go ahead and push new features to prod.&lt;/p&gt;
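&lt;p&gt;This margin is what SRE teams call an error budget, and tracking how much of it has already been burned is simple arithmetic. A hypothetical helper, not taken from the book:&lt;/p&gt;

```javascript
// Fraction of the yearly error budget consumed, given an SLO and the
// unplanned downtime observed so far (in minutes, over the same year)
const MINUTES_PER_YEAR = 365.25 * 24 * 60;

const errorBudgetConsumed = (sloPercent, downtimeMinutes) => {
    const budgetMinutes = ((100 - sloPercent) / 100) * MINUTES_PER_YEAR;
    return downtimeMinutes / budgetMinutes;
};

// With a 99.9% SLO, 4 hours of downtime burns about 46% of the yearly
// budget: time to slow down releases and focus on stability
console.log(errorBudgetConsumed(99.9, 4 * 60).toFixed(2)); // "0.46"
```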

&lt;p&gt;Of course, the Service Level Objective can change over time. It should be discussed among all the stakeholders of the project in regard to all the circumstances. It is a valuable metric to have.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2ATKt92huSBbSnbRNuAVTx_A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1600%2F1%2ATKt92huSBbSnbRNuAVTx_A.jpeg" alt="Automate ALL The Things"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Eliminating Toil
&lt;/h2&gt;

&lt;p&gt;Since SRE shouldn't devote more than 50% of their time to operational tasks, they regularly have to automate their work as their application scales. They should focus on reducing &lt;strong&gt;toil&lt;/strong&gt;, defined as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;a manual, repetitive and automatable task that is mandatory to the proper functioning of the app and that grows with it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Automating toil can be a non-negligible investment, but it comes with several advantages. Of course, it saves engineer's time. But it also leads engineers to focus on a task at the right time (not when the service is broken), in a less error-prone way. Moreover, it reduces context switching, allows everyone to take care of serious issues and focus on what really matters.&lt;/p&gt;

&lt;p&gt;As developers, we also have to deal with toil every day; it just takes another form. &lt;strong&gt;Manual tests&lt;/strong&gt; are the most common toil, which is why we write automated tests. The second most common is &lt;strong&gt;deployment and release management&lt;/strong&gt; in general. Continuous Delivery is a luxury that I definitely recommend! Other toils are more subtle: &lt;strong&gt;updating dependencies&lt;/strong&gt; that don't have breaking changes, &lt;strong&gt;benchmarking performance&lt;/strong&gt;, &lt;strong&gt;auditing security vulnerabilities&lt;/strong&gt;, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Eliminating toil increases the ability to scale and makes the application nicer to play with.&lt;/strong&gt; It's up to you to find it and automate it! The best way is to include this routine as a habit on a daily basis, just like any other development practice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpbs.twimg.com%2Fmedia%2FCVkAep3UkAAXUXT.jpg%3Alarge" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpbs.twimg.com%2Fmedia%2FCVkAep3UkAAXUXT.jpg%3Alarge" alt="Shame nun of Game of Thrones"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Blameless Culture of Retrospective
&lt;/h2&gt;

&lt;p&gt;SRE has a culture of continuous improvement (what agile methodologies translate as the &lt;em&gt;retrospective&lt;/em&gt;) in the form of &lt;strong&gt;postmortems&lt;/strong&gt;. Every time an issue or an accident is big enough or is user-facing, the on-call engineer writes a postmortem, another engineer reviews it and publishes it to the whole team.&lt;/p&gt;

&lt;p&gt;Postmortems are extremely useful to keep track of incidents, plan further actions, and avoid repeating mistakes. But SRE teams go beyond that: they write postmortems &lt;strong&gt;blamelessly&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Each postmortem focuses on facts and root causes analysis. It assumes that everyone had good intentions, and did the right actions with the pieces of information they had. All finger pointing and shaming are explicitly banned. Moreover, the management doesn't punish mistakes. Instead, they take postmortems as an opportunity to improve the robustness of their systems.&lt;/p&gt;

&lt;p&gt;It's a hard thing to do, but removing blame and feelings from retrospectives (and in this case postmortems) gives people the confidence to escalate the real issues. This matters all the more because most system failures are due to human mistakes.&lt;/p&gt;

&lt;p&gt;It was refreshing to read that from a book by Google engineers. It's a good example of how agile principles can be put in practice, beyond the development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This book is full of insights and I wrote about only a very small portion of them. Among other technical tips like "divide and conquer" or "simplicity by design", it also teaches how Google scales human interactions among its employees.&lt;/p&gt;

&lt;p&gt;I warmly recommend &lt;em&gt;Site Reliability Engineering&lt;/em&gt; to anyone who is interested in production scaling and DevOps, of course, especially the &lt;em&gt;Practices&lt;/em&gt; part, which is full of practical stuff. &lt;/p&gt;

&lt;p&gt;The whole book is a must for all engineering managers. The fourth part, &lt;em&gt;Management&lt;/em&gt;, gives valuable insights on how to deal with interrupts, handle or recover from operational overload, and so on.&lt;/p&gt;

&lt;p&gt;But I also recommend reading the &lt;em&gt;Principles&lt;/em&gt; part to every product owner and agile guru, to get a better understanding of how ops teams work and collaborate.&lt;/p&gt;

&lt;p&gt;The book is modular and easily accessible. You can read just a chapter of it without having to understand how the Google infrastructure works.&lt;/p&gt;

&lt;p&gt;The SRE book is &lt;a href="https://landing.google.com/sre/book.html" rel="noopener noreferrer"&gt;free to read online&lt;/a&gt;, or you can find it on &lt;a href="http://amzn.eu/8N8FRhV" rel="noopener noreferrer"&gt;Amazon&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Site Reliability Engineering is a new and very interesting field, and you can expect to read more about it on this very blog.&lt;/p&gt;

&lt;p&gt;Further reading about SRE:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://hackernoon.com/so-you-want-to-be-an-sre-34e832357a8c" rel="noopener noreferrer"&gt;So you want to be an SRE?&lt;/a&gt; by Krishelle Hardson-Hurley&lt;/li&gt;
&lt;li&gt;&lt;a href="https://landing.google.com/sre/resources.html" rel="noopener noreferrer"&gt;Google SRE resources&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Image credit: &lt;a href="https://www.utilityclick.com/google-energy/" rel="noopener noreferrer"&gt;Google's Finland Data Center&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>bookreview</category>
      <category>performance</category>
    </item>
    <item>
      <title>Finding And Fixing Node.js Memory Leaks: A Practical Guide</title>
      <dc:creator>Kmaschta</dc:creator>
      <pubDate>Mon, 16 Apr 2018 11:54:46 +0000</pubDate>
      <link>https://dev.to/kmaschta/finding-and-fixing-nodejs-memory-leaks-a-practical-guide-3f5a</link>
      <guid>https://dev.to/kmaschta/finding-and-fixing-nodejs-memory-leaks-a-practical-guide-3f5a</guid>
      <description>&lt;p&gt;Fixing memory leaks may not be not the shiniest skill on a CV, but when things go wrong on production, it's better to be prepared!&lt;/p&gt;

&lt;p&gt;After reading this article, you'll be able to monitor, understand, and debug the memory consumption of a Node.js application.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Memory Leaks Become A Problem
&lt;/h2&gt;

&lt;p&gt;Memory leaks often go unnoticed. They become a problem when someone pays extra attention to the production performance metrics.&lt;/p&gt;

&lt;p&gt;The first symptom of a memory leak on a production application is that memory, CPU usage, and the load average of the host machine increase over time, without any apparent reason.&lt;/p&gt;

&lt;p&gt;Insidiously, the response time becomes higher and higher, until a point when the CPU usage reaches 100%, and the application stops responding. When the memory is full, and there is not enough swap left, the server can even fail to accept SSH connections.&lt;/p&gt;

&lt;p&gt;But when the application is restarted, all the issues magically vanish! Nobody understands what happened, so the team moves on to other priorities, and the problem repeats itself periodically.&lt;/p&gt;
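&lt;p&gt;The usual culprit behind this pattern is some module-level structure that grows with every request and is never pruned. A deliberately leaky sketch, with made-up handler and log names for illustration:&lt;/p&gt;

```javascript
// Classic Node.js leak shape: a module-level array that every request
// appends to and that nothing ever empties
const requestLog = [];

const handleRequest = (url) => {
    // Each call retains a new object (and its 1 KiB buffer) forever
    requestLog.push({ url, at: Date.now(), payload: Buffer.alloc(1024) });
    return `handled ${url}`;
};

// Simulate traffic: the retained heap grows linearly with request count
for (let i = 0; i < 1000; i += 1) {
    handleRequest(`/page/${i}`);
}
console.log(requestLog.length); // 1000 objects the GC can never reclaim
```

&lt;p&gt;Restarting the process empties &lt;code&gt;requestLog&lt;/code&gt;, which is exactly why a restart makes the symptoms vanish for a while.&lt;/p&gt;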

&lt;p&gt;&lt;a href="https://marmelab.com/images/blog/memory/response-time-over-time.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarmelab.com%2Fimages%2Fblog%2Fmemory%2Fresponse-time-over-time.png" alt="NewRelic graph of a leak going full retard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Memory leaks aren't always that obvious, but when this pattern appears, it's time to look for a correlation between the memory usage and the response time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://marmelab.com/images/blog/memory/memory-usage-over-time.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarmelab.com%2Fimages%2Fblog%2Fmemory%2Fmemory-usage-over-time.png" alt="NewRelic graph of a leak going full retard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You've found a memory leak. Now the fun begins for you.&lt;/p&gt;

&lt;p&gt;Needless to say, I assume that you monitor your server. Otherwise, I highly recommend taking a look at &lt;a href="https://newrelic.com/" rel="noopener noreferrer"&gt;New Relic&lt;/a&gt;, &lt;a href="https://www.elastic.co/solutions/apm" rel="noopener noreferrer"&gt;Elastic APM&lt;/a&gt;, or any other monitoring solution. What can't be measured can't be fixed.&lt;/p&gt;
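&lt;p&gt;Even without a full APM, you can get a rough first signal from inside the process itself. Here is a minimal sketch of my own (not part of the original setup) using the built-in &lt;code&gt;process.memoryUsage()&lt;/code&gt;:&lt;/p&gt;

```javascript
// Log the process memory usage; steady heapUsed growth under constant
// load is the first hint of a leak. (Sketch only, no substitute for an APM.)
const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(1);

const logMemory = () => {
    const { rss, heapTotal, heapUsed } = process.memoryUsage();
    console.log(`rss=${toMB(rss)}MB heapTotal=${toMB(heapTotal)}MB heapUsed=${toMB(heapUsed)}MB`);
};

logMemory();
// In a real app, you would schedule it: setInterval(logMemory, 10000);
```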

&lt;h2&gt;
  
  
  Restart Before It's Too Late
&lt;/h2&gt;

&lt;p&gt;Finding and fixing a memory leak in Node.js takes time - usually a day or more. If your backlog can't accommodate time to investigate the leak in the near future, I advise looking for a temporary solution, and dealing with the root cause later. A rational way (in the short term) to postpone the problem is to restart the application before it reaches its critical bloat.&lt;/p&gt;

&lt;p&gt;For &lt;a href="http://pm2.keymetrics.io/" rel="noopener noreferrer"&gt;PM2&lt;/a&gt; users, the &lt;a href="http://pm2.keymetrics.io/docs/usage/process-management/#max-memory-restart" rel="noopener noreferrer"&gt;&lt;code&gt;max_memory_restart&lt;/code&gt;&lt;/a&gt; option is available to automatically restart node processes when they reach a certain amount of memory.&lt;/p&gt;

&lt;p&gt;Now that we're comfortably seated, with a cup of tea and a few hours ahead, let's dig into the tools that'll help you find these little RAM squatters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating An Effective Test Environment
&lt;/h2&gt;

&lt;p&gt;Before measuring anything, do yourself a favor, and take the time to set up a proper test environment. It can be a Virtual Machine or an AWS EC2 instance, but it needs to replicate the exact same conditions as in production.&lt;/p&gt;

&lt;p&gt;The code should be built, optimized, and configured exactly as it is on production, in order to reproduce the leak identically. Ideally, it's better to use the same &lt;a href="///blog/2018/01/22/configurable-artifact-in-deployment.html"&gt;deployment artifact&lt;/a&gt;, so you can be certain that there is no difference between production and the new test environment.&lt;/p&gt;

&lt;p&gt;A duly configured test environment is not enough: it should run the same load as production, too. To this end, feel free to grab production logs and send the same requests to the test environment. During my debugging quest, I discovered &lt;a href="https://www.joedog.org/siege-home/" rel="noopener noreferrer"&gt;siege&lt;/a&gt;, &lt;em&gt;an HTTP/FTP load tester and benchmarking utility&lt;/em&gt;, which is pretty useful when it comes to measuring memory under heavy load.&lt;/p&gt;

&lt;p&gt;Also, resist the urge to enable developer tools or verbose loggers if they are not necessary, otherwise &lt;a href="https://github.com/bithavoc/express-winston/pull/164" rel="noopener noreferrer"&gt;you'll end up debugging these dev tools&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing Node.js Memory Using V8 Inspector &amp;amp; Chrome Dev Tools
&lt;/h2&gt;

&lt;p&gt;I love the Chrome Dev Tools. &lt;code&gt;F12&lt;/code&gt; is the key that I type the most after &lt;code&gt;Ctrl+C&lt;/code&gt; and &lt;code&gt;Ctrl+V&lt;/code&gt; (because I mostly do Stack Overflow-Driven Development - just kidding).&lt;/p&gt;

&lt;p&gt;Did you know that you can use the same Dev Tools to inspect Node.js applications? Node.js and Chrome run the same engine, &lt;a href="https://developers.google.com/v8/" rel="noopener noreferrer"&gt;&lt;code&gt;Chrome V8&lt;/code&gt;&lt;/a&gt;, which contains the inspector used by the Dev Tools.&lt;/p&gt;

&lt;p&gt;For educational purposes, let's say that we have the simplest HTTP server ever, whose only purpose is to return every request it has ever received:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;requestLogs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;requestLogs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;date&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;requestLogs&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Server listening to port 3000. Press Ctrl+C to stop it.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to expose the inspector, let's run Node.js with the &lt;code&gt;--inspect&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;node &lt;span class="nt"&gt;--inspect&lt;/span&gt; index.js 
Debugger listening on ws://127.0.0.1:9229/655aa7fe-a557-457c-9204-fb9abfe26b0f
For &lt;span class="nb"&gt;help &lt;/span&gt;see https://nodejs.org/en/docs/inspector
Server listening to port 3000. Press Ctrl+C to stop it.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, run Chrome (or Chromium), and go to the following URI: &lt;code&gt;chrome://inspect&lt;/code&gt;. Voila! A full-featured debugger for your Node.js application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://marmelab.com/images/blog/memory/chrome-devtools.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarmelab.com%2Fimages%2Fblog%2Fmemory%2Fchrome-devtools.png" alt="Chrome Dev Tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Taking Snapshots Of The V8 Memory
&lt;/h2&gt;

&lt;p&gt;Let's play with the &lt;em&gt;Memory&lt;/em&gt; tab a bit. The simplest option available is &lt;em&gt;Take heap snapshot&lt;/em&gt;. It does what you expect: it creates a dump of the heap memory for the inspected application, with a lot of details about the memory usage.&lt;/p&gt;

&lt;p&gt;Memory snapshots are useful to track memory leaks. A usual technique consists of comparing multiple snapshots at different key points to see if the memory size grows, when it does, and how.&lt;/p&gt;

&lt;p&gt;For example, we'll take three snapshots: one after the server start, one after 30 seconds of load, and the last one after another session of load.&lt;/p&gt;

&lt;p&gt;To simulate the load, I'll use the &lt;code&gt;siege&lt;/code&gt; utility introduced above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;timeout &lt;/span&gt;30s siege http://localhost:3000

&lt;span class="k"&gt;**&lt;/span&gt; SIEGE 4.0.2          
&lt;span class="k"&gt;**&lt;/span&gt; Preparing 25 concurrent &lt;span class="nb"&gt;users &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;battle.
The server is now under siege...
Lifting the server siege...
Transactions:               2682 hits
Availability:             100.00 %
Elapsed &lt;span class="nb"&gt;time&lt;/span&gt;:              30.00 secs
Data transferred:         192.18 MB
Response &lt;span class="nb"&gt;time&lt;/span&gt;:              0.01 secs
Transaction rate:          89.40 trans/sec
Throughput:             6.41 MB/sec
Concurrency:                0.71
Successful transactions:        2682
Failed transactions:               0
Longest transaction:            0.03
Shortest transaction:           0.00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the result of my simulation (click to see the full size):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://marmelab.com/images/blog/memory/snapshots-comparison.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarmelab.com%2Fimages%2Fblog%2Fmemory%2Fsnapshots-comparison.png" alt="Heap Snapshots Comparison"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A lot to see!&lt;/p&gt;

&lt;p&gt;On the first snapshot, there are already 5MB allocated before any request is processed. It's totally expected: each variable or imported module is injected into memory. Analyzing the first snapshot allows optimizing the server start, for example - but that's not our current task.&lt;/p&gt;

&lt;p&gt;What interests me here is to know if the server memory grows over time while it's used. As you can see, the third snapshot has 6.7MB while the second has 6.2MB: some memory was allocated in the interval. But which function allocated it?&lt;/p&gt;

&lt;p&gt;I can compare the difference in allocated objects by clicking on the latest snapshot (1), changing the mode to &lt;em&gt;Comparison&lt;/em&gt; (2), and selecting the snapshot to compare with (3). This is what the image above shows.&lt;/p&gt;

&lt;p&gt;Exactly 2,682 &lt;code&gt;Date&lt;/code&gt; objects and 2,682 &lt;code&gt;Objects&lt;/code&gt; were allocated between the two load sessions. Unsurprisingly, siege made exactly 2,682 requests to the server: it's a strong indicator that we have one allocation per request. But not all "leaks" are that obvious, so the inspector also shows you where the memory was allocated: in the &lt;code&gt;requestLogs&lt;/code&gt; variable in the system Context (the root scope of the app).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: It's normal that V8 allocates memory for new objects. JavaScript is a garbage-collected runtime, so the V8 engine frees up memory at regular intervals. What's not normal is when it doesn't collect the allocated memory after a few seconds. &lt;/p&gt;
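&lt;p&gt;One way to tell garbage from a leak is to force a collection and see whether the memory is released. The sketch below (my own illustration, to run with the &lt;code&gt;--expose-gc&lt;/code&gt; flag) allocates objects, drops the only reference, then triggers the garbage collector:&lt;/p&gt;

```javascript
// Run with: node --expose-gc gc-check.js
// If heapUsed drops back after the forced collection, the growth was
// collectible garbage, not a leak; a real leak keeps its references alive.
const before = process.memoryUsage().heapUsed;

let data = [];
let i = 100000;
while (i--) {
    data.push({ index: i, date: new Date() });
}
data = null; // drop the only reference to the allocated objects

if (typeof global.gc === 'function') {
    global.gc(); // only exists when Node.js runs with --expose-gc
}

const after = process.memoryUsage().heapUsed;
console.log(`heapUsed: ${(before / 1048576).toFixed(1)}MB then ${(after / 1048576).toFixed(1)}MB`);
```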

&lt;h2&gt;
  
  
  Watching Memory Allocation In Real Time
&lt;/h2&gt;

&lt;p&gt;Another method to measure the memory allocation is to see it live instead of taking multiple snapshots. To do so, click on &lt;em&gt;Record allocation timeline&lt;/em&gt; while the siege simulation is in progress.&lt;/p&gt;

&lt;p&gt;For the following example, I started siege 5 seconds after the recording began, and ran it for 10 seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://marmelab.com/images/blog/memory/allocation-timeline.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarmelab.com%2Fimages%2Fblog%2Fmemory%2Fallocation-timeline.png" alt="Heap Allocation Timeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the first requests, you can see a visible spike of allocations. It's related to the HTTP module initialization. But if you zoom in on the more common allocations (such as on the image above), you'll notice that, again, it's the dates and objects that take the most memory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using The Heap Dump Npm Package
&lt;/h2&gt;

&lt;p&gt;An alternative method to get a heap snapshot is to use the &lt;a href="https://www.npmjs.com/package/heapdump" rel="noopener noreferrer"&gt;heapdump&lt;/a&gt; module. Its usage is pretty simple: once the module is imported, you can either call the &lt;code&gt;writeSnapshot&lt;/code&gt; method, or send a &lt;a href="https://en.wikipedia.org/wiki/Signal_(IPC)" rel="noopener noreferrer"&gt;SIGUSR2 signal&lt;/a&gt; to the Node process.&lt;/p&gt;

&lt;p&gt;Just update the app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;heapdump&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;heapdump&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;requestLogs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/heapdump&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;heapdump&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeSnapshot&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Heap dump written to&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;requestLogs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;date&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;requestLogs&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Server listening to port 3000. Press Ctrl+C to stop it.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Heapdump enabled. Run "kill -USR2 &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;" or send a request to "/heapdump" to generate a heapdump.`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And trigger a dump:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;node index.js
Server listening to port 3000. Press Ctrl+C to stop it.
Heapdump enabled. Run &lt;span class="s2"&gt;"kill -USR2 29431"&lt;/span&gt; or send a request to &lt;span class="s2"&gt;"/heapdump"&lt;/span&gt; to generate a heapdump.

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;kill&lt;/span&gt; &lt;span class="nt"&gt;-USR2&lt;/span&gt; 29431
&lt;span class="nv"&gt;$ &lt;/span&gt;curl http://localhost:3000/heapdump
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;heapdump-31208326.300922.heapsnapshot
heapdump-31216569.978846.heapsnapshot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll note that running &lt;code&gt;kill -USR2&lt;/code&gt; doesn't actually kill the process. The &lt;code&gt;kill&lt;/code&gt; command, despite its scary name, is just a tool to send signals to processes, by default a &lt;code&gt;SIGTERM&lt;/code&gt;. With the argument &lt;code&gt;-USR2&lt;/code&gt;, I choose to send a &lt;code&gt;SIGUSR2&lt;/code&gt; signal instead, which is a user-defined signal.&lt;/p&gt;

&lt;p&gt;As a last resort, you can use the signal method to generate a heap dump on the production instance. But be aware that creating a heap snapshot requires memory twice the size of the heap at the time of the snapshot.&lt;/p&gt;

&lt;p&gt;Once the snapshot is available, you can read it with the Chrome DevTools. Just open the Memory tab, right-click on the side and select &lt;em&gt;Load&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://marmelab.com/images/blog/memory/load-heap-snapshot.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarmelab.com%2Fimages%2Fblog%2Fmemory%2Fload-heap-snapshot.png" alt="Load a Heap Snapshot into the Chrome Inspector"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Fixing the Leak
&lt;/h2&gt;

&lt;p&gt;Now that I have identified what grows the memory heap, I have to find a solution. For my example, the solution is to store the logs not in memory, but on the filesystem. On a real project, it's better to delegate log storage to another service like syslog, or use an appropriate storage like a database, a Redis instance, or whatever.&lt;/p&gt;

&lt;p&gt;Here is the modified web server with no more memory leak:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Not the best implementation. Do not try this at home.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./requests.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;readRequests&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;writeRequest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;requests&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;readRequests&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="nx"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;date&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;writeRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;readRequests&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Server listening to port 3000. Press Ctrl+C to stop it.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's run the same test scenario as before, and measure the outcome:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;timeout &lt;/span&gt;30s siege http://localhost:3000

&lt;span class="k"&gt;**&lt;/span&gt; SIEGE 4.0.2
&lt;span class="k"&gt;**&lt;/span&gt; Preparing 25 concurrent &lt;span class="nb"&gt;users &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;battle.
The server is now under siege...
Lifting the server siege...
Transactions:               1931 hits
Availability:             100.00 %
Elapsed &lt;span class="nb"&gt;time&lt;/span&gt;:              30.00 secs
Data transferred:        1065.68 MB
Response &lt;span class="nb"&gt;time&lt;/span&gt;:              0.14 secs
Transaction rate:          64.37 trans/sec
Throughput:            35.52 MB/sec
Concurrency:                9.10
Successful transactions:        1931
Failed transactions:               0
Longest transaction:            0.38
Shortest transaction:           0.01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://marmelab.com/images/blog/memory/fixed-memory-usage.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmarmelab.com%2Fimages%2Fblog%2Fmemory%2Ffixed-memory-usage.png" alt="Fixed Memory Usage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the memory growth is far slower! This is because we no longer store the request logs in memory (inside the &lt;code&gt;requestLogs&lt;/code&gt; variable) for each request.&lt;/p&gt;

&lt;p&gt;That said, the API now takes more time to respond: I had 89.40 transactions per second before the fix, and now we have 64.37.&lt;br&gt;
Reading from and writing to the disk comes with a cost, as do other API calls or database requests.&lt;/p&gt;

&lt;p&gt;Note that it's important to measure memory consumption before and after a potential fix, in order to confirm (and prove) that the memory issue is fixed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Actually, fixing a memory leak once it's been identified is somewhat easy: use well known and tested libraries, don't copy or store heavy objects for too long, and so on.&lt;/p&gt;

&lt;p&gt;The hardest part is finding them. Fortunately, and &lt;a href="https://github.com/nodejs/node/issues/18759" rel="noopener noreferrer"&gt;despite a few bugs&lt;/a&gt;, the current Node.js tools are neat. And now you know how to use them!&lt;/p&gt;

&lt;p&gt;To keep this article short and understandable, I didn't mention other tools like the &lt;a href="https://www.npmjs.com/package/memwatch" rel="noopener noreferrer"&gt;memwatch&lt;/a&gt; module (easy) or Core Dump analysis with &lt;code&gt;llnode&lt;/code&gt; or &lt;code&gt;mdb&lt;/code&gt; (advanced), but I'll leave you with more detailed readings about them:&lt;/p&gt;

&lt;p&gt;Further reading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.toptal.com/nodejs/debugging-memory-leaks-node-js-applications" rel="noopener noreferrer"&gt;Debugging Memory Leaks in Node.js Applications&lt;/a&gt; by Vladyslav Millier&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://blog.codeship.com/understanding-garbage-collection-in-node-js/" rel="noopener noreferrer"&gt;Understanding Garbage Collection and Hunting Memory Leaks in Node.js&lt;/a&gt; by Daniel Khan&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://www.brendangregg.com/blog/2016-07-13/llnode-nodejs-memory-leak-analysis.html" rel="noopener noreferrer"&gt;llnode for Node.js Memory Leak Analysis&lt;/a&gt; by Brendan Gregg&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.reaktor.com/blog/debugging-node-js-applications-using-core-dumps/" rel="noopener noreferrer"&gt;Debugging Node.js applications using core dumps&lt;/a&gt; by Antti Risteli&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>node</category>
      <category>javascript</category>
      <category>performance</category>
      <category>devops</category>
    </item>
    <item>
      <title>Configurable Artifacts: How To Deploy Like a Pro</title>
      <dc:creator>Kmaschta</dc:creator>
      <pubDate>Tue, 23 Jan 2018 13:22:54 +0000</pubDate>
      <link>https://dev.to/kmaschta/configurable-artifacts-how-to-deploy-like-a-pro-4fhk</link>
      <guid>https://dev.to/kmaschta/configurable-artifacts-how-to-deploy-like-a-pro-4fhk</guid>
      <description>&lt;p&gt;When a project team grows, feature deployments become more frequent. Automating these deployments then becomes critical to optimize the development workflow. In my opinion, the best practice is artifact-based deployment, a lifesaver process that I use as much as possible. It's quite popular, and part of the &lt;a href="https://12factor.net/config" rel="noopener noreferrer"&gt;The Twelve-Factor App&lt;/a&gt; pattern.&lt;/p&gt;

&lt;p&gt;This article illustrates artifact-based deployment in simple terms, through a practical example.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is A Deployment Artifact
&lt;/h2&gt;

&lt;p&gt;Artifacts aren't a new idea, but like every good practice, it's worth writing down.&lt;/p&gt;

&lt;p&gt;A deployment artifact (or a &lt;code&gt;build&lt;/code&gt;) is the application code as it runs on production: compiled, built, bundled, minified, optimized, and so on. Most often, it's a single binary, or a bunch of files compressed in an archive.&lt;/p&gt;

&lt;p&gt;In such a state, you can store and version an artifact. The ultimate goal of an artifact is to be downloaded as fast as possible on a server, and run immediately, with no service interruption.&lt;/p&gt;

&lt;p&gt;Also, an artifact should be configurable, so that it can be deployed in any environment. For example, if you need to deploy to both staging and production servers, you should be able to use the same artifact.&lt;/p&gt;

&lt;p&gt;Yes, you read that right: only the configuration changes, not the artifact itself. It can seem constraining or difficult, but it's the defining feature of a deployment artifact. If you have to build your artifact twice for two environments, you are missing the whole point.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Example Project: a Basic Proxy
&lt;/h2&gt;

&lt;p&gt;Let's take an example project to deploy. I'll write a basic HTTP proxy, adding a random HTTP header to every request, in Node.js.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; The code is available in &lt;a href="https://gist.github.com/Kmaschta/0920c6a7781cdf15c37a51b370e4fb66" rel="noopener noreferrer"&gt;this gist&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, install the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;deployment-example &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$_&lt;/span&gt; &lt;span class="c"&gt;# cd into the directory just created&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;npm init &lt;span class="nt"&gt;--yes&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--save&lt;/span&gt; express axios reify
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the server, proxying all requests to the &lt;code&gt;http://perdu.com&lt;/code&gt; backend, after adding a random header:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// server.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;uuid&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;uuid&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NODE_PORT&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseURL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PROXY_HOST&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http://perdu.com/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;baseURL&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;requestOptions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;X-Random&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;v4&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;requestOptions&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Proxy is running on port &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;. Press Ctrl+C to stop.`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; Did you notice how I used environment variables via &lt;code&gt;process.env&lt;/code&gt;? It's central to my point, so keep it in mind for later.&lt;/p&gt;
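
&lt;p&gt;To make that pattern explicit, here is a minimal sketch of environment-based configuration. The &lt;code&gt;getConfig&lt;/code&gt; helper is my own, not a library function:&lt;/p&gt;

```javascript
// Sketch: read configuration from the environment, with a sensible default.
// The artifact never changes from one environment to another; only these
// variables do.
const getConfig = (name, defaultValue) => {
    const value = process.env[name];
    return value === undefined ? defaultValue : value;
};

const port = getConfig('NODE_PORT', 3000); // falls back to 3000 when unset
const baseURL = getConfig('PROXY_HOST', 'http://perdu.com/');
console.log(`Would proxy to ${baseURL} on port ${port}`);
```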

&lt;p&gt;A small makefile to run the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="nl"&gt;start&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="c"&gt;# reify is a small lib to fully support import/export without having to install the babel suite&lt;/span&gt;
    node &lt;span class="nt"&gt;--require&lt;/span&gt; reify server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server runs in the local environment; all good, "it works on my machine™":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;make start
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Proxy is running on port 3000. Press Ctrl+C to stop.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Building an Artifact
&lt;/h3&gt;

&lt;p&gt;Now let's ship this code to a staging environment, in order to test it in somewhat real conditions.&lt;/p&gt;

&lt;p&gt;At this point, it's a good idea to freeze a version of the code (with a Git tag or a GitHub release for instance).&lt;/p&gt;

&lt;p&gt;The simplest way to prepare the deployment is to &lt;strong&gt;build&lt;/strong&gt; a zip file with the source code and all its dependencies. I'll add the following target to the &lt;code&gt;makefile&lt;/code&gt;, creating a zip with the identifier of the latest commit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="nl"&gt;build&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; build
    zip &lt;span class="s1"&gt;'build/&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s1"&gt;shell git log -1 --pretty="%h"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s1"&gt;.zip'&lt;/span&gt; Makefile package.json &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="s1"&gt;'*.js'&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="s1"&gt;'*.json'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;make build
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;build/
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 4dd370f.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The build step is simplified here, but on real projects it can involve a bundler, a transpiler, a minifier, and so on.&lt;br&gt;
All these lengthy tasks should be done at the build step.&lt;/p&gt;

&lt;p&gt;The resulting zip file is what we can call an &lt;strong&gt;artifact&lt;/strong&gt;. It can be deployed on an external server, or stored in an S3 bucket for later usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; Once you find the build process that fits you, automate it! Usually, a Continuous Integration / Continuous Delivery (CI/CD) system like Travis or Jenkins runs the tests and, if they pass, builds the artifact in order to store it.&lt;/p&gt;
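
&lt;p&gt;As a sketch, the CI could call a &lt;code&gt;makefile&lt;/code&gt; target like this one after the tests pass (the bucket name and the use of the AWS CLI are illustrative):&lt;/p&gt;

```make
# Hypothetical CI step: publish the freshly built artifact for later deployments
publish: build
    aws s3 cp "build/$(shell git log -1 --pretty=%h).zip" s3://my-artifact-bucket/
```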
&lt;h3&gt;
  
  
  Deploying the Artifact
&lt;/h3&gt;

&lt;p&gt;To deploy the artifact, just copy the file to the server, extract it, and run the code. Once again, I automate this as a &lt;code&gt;makefile&lt;/code&gt; target:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="nv"&gt;TAG&lt;/span&gt; &lt;span class="o"&gt;?=&lt;/span&gt;
&lt;span class="nv"&gt;SERVER&lt;/span&gt; &lt;span class="o"&gt;?=&lt;/span&gt; proxy-staging

&lt;span class="nl"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    scp build/&lt;span class="p"&gt;$(&lt;/span&gt;TAG&lt;span class="p"&gt;)&lt;/span&gt;.zip &lt;span class="p"&gt;$(&lt;/span&gt;SERVER&lt;span class="p"&gt;)&lt;/span&gt;:/data/www/deployment-example/
    ssh &lt;span class="p"&gt;$(&lt;/span&gt;SERVER&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;" &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
        cd /data/www/deployment-example/ &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
        unzip &lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;TAG&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;.zip -d &lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;TAG&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/ &amp;amp;&amp;amp; rm &lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;TAG&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;.zip &lt;/span&gt;&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="s2"&gt;     # unzip the code in a folder&lt;/span&gt;
        &lt;span class="s2"&gt;cd current/ &amp;amp;&amp;amp; make stop &lt;/span&gt;&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="s2"&gt;                         # stop the current server&lt;/span&gt;
        &lt;span class="s2"&gt;cd ../ &amp;amp;&amp;amp; rm current/ &amp;amp;&amp;amp; ln -s &lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;TAG&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/ current/ &lt;/span&gt;&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="s2"&gt;  # move the symbolic link to the new version&lt;/span&gt;
        &lt;span class="s2"&gt;cd current/ &amp;amp;&amp;amp; make start &lt;/span&gt;&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="s2"&gt;                        # restart the server&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'Deployed &lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s1"&gt;TAG&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s1"&gt; version to &lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s1"&gt;SERVER&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I use environment variables to specify the tag I want to deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4dd370f make deploy
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Deployed 4dd370f version to proxy-staging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the deployment is actually very fast, because I don't need to &lt;em&gt;build&lt;/em&gt; in the target environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; That means that if the build contains binaries, they must be compiled for the target environment. To put it simply, you should &lt;em&gt;develop&lt;/em&gt; your code on the same system as you &lt;em&gt;run&lt;/em&gt; it. It's simpler than it sounds once you use Vagrant or Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; I used the &lt;code&gt;ssh&lt;/code&gt; command without any credential arguments. You don't want those credentials in a &lt;code&gt;makefile&lt;/code&gt;, so I advise using your local &lt;code&gt;~/.ssh/config&lt;/code&gt; to save them. This way, you can securely share them with your co-workers, and keep these credentials outside of the repository, while having the deployment instructions in the Makefile. Here is an example SSH configuration for my &lt;code&gt;proxy-staging&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;Host&lt;/span&gt; &lt;span class="n"&gt;proxy&lt;/span&gt;-&lt;span class="n"&gt;staging&lt;/span&gt;
    &lt;span class="n"&gt;Hostname&lt;/span&gt; &lt;span class="n"&gt;staging&lt;/span&gt;.&lt;span class="n"&gt;domain&lt;/span&gt;.&lt;span class="n"&gt;me&lt;/span&gt;
    &lt;span class="n"&gt;User&lt;/span&gt; &lt;span class="n"&gt;ubuntu&lt;/span&gt;
    &lt;span class="n"&gt;IdentityFile&lt;/span&gt; ~/.&lt;span class="n"&gt;ssh&lt;/span&gt;/&lt;span class="n"&gt;keys&lt;/span&gt;/&lt;span class="n"&gt;staging&lt;/span&gt;.&lt;span class="n"&gt;pem&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploying to Several Environments
&lt;/h3&gt;

&lt;p&gt;The server can now run on the &lt;code&gt;proxy-staging&lt;/code&gt; server, with the default configuration. What if I want to deploy the same build to a &lt;code&gt;proxy-production&lt;/code&gt; server, but with a different port?&lt;/p&gt;

&lt;p&gt;All I need is to be sure that wherever the code runs, the &lt;code&gt;NODE_PORT&lt;/code&gt; environment variable is set to the correct value.&lt;/p&gt;

&lt;p&gt;There are many ways to achieve this &lt;strong&gt;configuration&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The simplest one is to directly write the values of the environment variables in a &lt;code&gt;~/.ssh/environment&lt;/code&gt; file in each server. That way, I don't need to remember how or when to retrieve the configuration: it's loaded automatically each time I log into the machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ssh production-server &lt;span class="s2"&gt;"echo 'NODE_PORT=8000' &amp;gt;&amp;gt; ~/.ssh/environment"&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ssh production-server &lt;span class="s2"&gt;"echo 'PROXY_HOST=https://host.to.proxy' &amp;gt;&amp;gt; ~/.ssh/environment"&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ssh production-server
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;env
&lt;/span&gt;&lt;span class="nv"&gt;NODE_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8000
&lt;span class="nv"&gt;PROXY_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://host.to.proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
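
&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; OpenSSH ignores &lt;code&gt;~/.ssh/environment&lt;/code&gt; by default; the server must explicitly allow it in its &lt;code&gt;sshd_config&lt;/code&gt;, and the SSH daemon must be restarted after the change:&lt;/p&gt;

```conf
# /etc/ssh/sshd_config (server side): allow ~/.ssh/environment to be loaded
PermitUserEnvironment yes
```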



&lt;p&gt;Now I can deploy to &lt;code&gt;proxy-production&lt;/code&gt; the same artifact that I've used for &lt;code&gt;proxy-staging&lt;/code&gt;, no need to rebuild.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4dd370f &lt;span class="nv"&gt;SERVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;proxy-production make deploy
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Deployed 4dd370f version to proxy-production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; A &lt;strong&gt;rollback&lt;/strong&gt; is as simple as deploying the previous artifact.&lt;/p&gt;

&lt;p&gt;At this point, it is easy to automate the process: let the CI build an artifact each time a PR is merged, or on every push to master (Travis, Jenkins, and every other CI let you implement a build phase). Then, store this build somewhere with a specific tag, such as the commit hash or the release tag.&lt;/p&gt;

&lt;p&gt;When somebody wants to deploy, they can run a script on the server that downloads the artifact, configures it thanks to the environment variables, and runs it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; If you don't want to write your environment variables on every production server you have, you can use a &lt;strong&gt;configuration manager&lt;/strong&gt; like &lt;a alt="comfygure" href="https://github.com/marmelab/comfygure" rel="noopener noreferrer"&gt;comfygure&lt;/a&gt;. Disclaimer: We wrote it!&lt;/p&gt;

&lt;h2&gt;
  
  
  When You Should Use Artifact-Based Deployment
&lt;/h2&gt;

&lt;p&gt;Artifact-based deployment comes with many advantages: deployments are quick, rollbacks are instant, and backups are available and ready to run.&lt;/p&gt;

&lt;p&gt;It lets you run the exact same code in every environment. In fact, with artifacts, deployment becomes a &lt;em&gt;configuration&lt;/em&gt; issue. If the code works on staging, there is no reason it should fail on production, unless there is a mistake in the configuration, or the environments aren't the same. For that reason, it's a good idea to invest in a staging environment that strictly mirrors the production environment. &lt;/p&gt;

&lt;p&gt;Feature flagging is also easier with such a deployment process: just check the environment variables. And last but not least, the deployment can be automated to such an extent that it can be done by a non-technical person (like a product owner). We do it, and our customers love it.&lt;/p&gt;

&lt;p&gt;But automating such a process comes with a substantial cost. It takes time to install and maintain, because when the build process involves more than just zipping a few files, you need to run it with a &lt;em&gt;bundler&lt;/em&gt; like Webpack. Also, consider the extra disk space necessary to store the artifacts and the backups. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt; Use environment variables; do not check the environment. Try to find instances of &lt;code&gt;if (env !== 'production')&lt;/code&gt; in your code, and replace them with a meaningful environment variable, like &lt;code&gt;if (process.env.LOGGING_LEVEL === 'verbose')&lt;/code&gt;.&lt;/p&gt;
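
&lt;p&gt;For instance, feature flags can be derived from dedicated environment variables. A quick sketch (the variable and flag names are illustrative):&lt;/p&gt;

```javascript
// Sketch: decide behavior from dedicated environment variables instead of
// checking the environment name.
const featureFlags = (env) => ({
    verboseLogging: env.LOGGING_LEVEL === 'verbose',
    newCheckout: env.FEATURE_NEW_CHECKOUT === 'true',
});

const flags = featureFlags(process.env);
if (flags.verboseLogging) {
    console.log('Verbose logging enabled');
}
```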

&lt;p&gt;There is no need for such a deployment process for a proof of concept, an early-stage project, or a solo developer. The advantages really pay off for a team working on a mature project.&lt;/p&gt;

&lt;p&gt;Ask yourself how much time you spend on deployment, and take a look at this chart!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0lkfya38rrykgj3qmos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0lkfya38rrykgj3qmos.png" alt="xkcd: Is it worth the time?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;My project is an SPA / PWA. I must configure my bundle at the build phase!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You don't have to. There are many ways to configure a Single Page Application later in the process. For instance, load a &lt;code&gt;config.json&lt;/code&gt; at runtime from a different location than the other static assets. Or, write a &lt;code&gt;window.__CONFIG__&lt;/code&gt; object with server-side rendering.&lt;/p&gt;

&lt;p&gt;Don't be afraid to expose the frontend config; it makes debugging easier. If you rely on sensitive information in your frontend configuration staying hidden in your minified and cryptic webpack build, you're already doing it wrong.&lt;/p&gt;
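
&lt;p&gt;A minimal sketch of such runtime configuration, assuming the server injects a &lt;code&gt;window.__CONFIG__&lt;/code&gt; object (the names and defaults are illustrative):&lt;/p&gt;

```javascript
// Sketch: merge defaults baked into the bundle with a config injected at
// runtime, e.g. window.__CONFIG__ written by server-side rendering, or a
// fetched config.json.
const defaults = { apiUrl: 'http://localhost:3000', loggingLevel: 'error' };

const getRuntimeConfig = (injected) =>
    Object.assign({}, defaults, injected);

// In the browser this would be: getRuntimeConfig(window.__CONFIG__)
const config = getRuntimeConfig({ apiUrl: 'https://api.staging.example.com' });
console.log(config.apiUrl);
```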

&lt;p&gt;&lt;strong&gt;What about database migrations?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Migrations don't have to run when the application deploys. They should have their own pipeline that can be triggered at the appropriate moment. This way, you can handle each deployment on its own, and roll back any of them independently. It clearly implies doing backups often, and having revertible migrations.&lt;/p&gt;

&lt;p&gt;In case of a big or painful migration, don't hesitate to do it step by step. For example, to move a table: first create the new table, then copy the data, and finally delete the old table, in three different migrations. The application can be deployed between the copy and the table deletion. If something goes wrong, the application can be reverted quickly without touching the database.&lt;/p&gt;
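
&lt;p&gt;A PostgreSQL-flavoured sketch of those three migrations (the table names are illustrative):&lt;/p&gt;

```sql
-- Migration 1: create the new table with the same structure
CREATE TABLE customers_v2 (LIKE customers INCLUDING ALL);

-- Migration 2: copy the data
INSERT INTO customers_v2 SELECT * FROM customers;

-- Deploy the new application version here, and confirm it works.

-- Migration 3: delete the old table
DROP TABLE customers;
```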

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Once this process is correctly set up, deployments become quicker and safer, and teams and product owners can be more confident about them.&lt;/p&gt;

&lt;p&gt;Artifact-based deployment is a powerful technique that is truly worth considering when a project approaches maturity.&lt;br&gt;
It fits perfectly with agile methodologies like Scrum, even more so with Kanban, and it's a prerequisite for continuous delivery.&lt;/p&gt;

&lt;p&gt;To go further with deployment techniques, I recommend Nick Craver's in-depth article: &lt;a href="https://nickcraver.com/blog/2016/05/03/stack-overflow-how-we-do-deployment-2016-edition/" rel="noopener noreferrer"&gt;Stack Overflow: How We Do Deployment - 2016 Edition&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>deployment</category>
    </item>
  </channel>
</rss>
