<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: You Know, For Devs</title>
    <description>The latest articles on DEV Community by You Know, For Devs (@youknowfordevs).</description>
    <link>https://dev.to/youknowfordevs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F426400%2Ff0d363f3-eedb-49ce-b27a-40503423b159.png</url>
      <title>DEV Community: You Know, For Devs</title>
      <link>https://dev.to/youknowfordevs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/youknowfordevs"/>
    <language>en</language>
    <item>
      <title>What I Learnt From Reviewing 22 CVs</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Sat, 04 Jul 2020 14:08:00 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/what-i-learnt-from-reviewing-22-cvs-3dl8</link>
      <guid>https://dev.to/youknowfordevs/what-i-learnt-from-reviewing-22-cvs-3dl8</guid>
      <description>&lt;p&gt;I was recently asked to look over some CVs. It’s a while since I’ve done this, so I thought that noting down my immediate reactions as I went through the CVs might help anyone having to circulate their own CV at the moment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y9cepHp0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/christopher-sardegna-iRyGmA_no2Q-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y9cepHp0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/christopher-sardegna-iRyGmA_no2Q-unsplash.jpg" alt="Photo by Christopher Sardegna on Unsplash"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m not for a moment saying I’m right or wrong in the following views, and this is definitely not one of those posts about ‘how to write your CV to land your dream job’. I’m merely laying out the responses I had and, more importantly, what I felt these responses told me about the person behind the CV. My ‘gut reactions’ may well be similar to others’, and if they are, it might be worth bearing them in mind when you share your CV.&lt;/p&gt;

&lt;h2&gt;Looks Count More Than I Thought&lt;/h2&gt;

&lt;p&gt;First, it wasn’t until I reached the ninth CV that I realised how much of a difference a CV’s appearance makes. When viewing numbers one to eight I hadn’t been thinking ‘these look terrible’, but when I reached the ninth, a part of my brain said: ‘Finally! A CV that looks good.’&lt;/p&gt;

&lt;p&gt;I think if the role had been for a C++ programmer I would probably have simply smiled to myself, ignored my response, and moved on. That’s not to say that appearance doesn’t matter–or that C++ programmers don’t worry about their appearance–but if I were trying to decide between two C++ programmers it wouldn’t be the look of their CV that was the deciding factor.&lt;/p&gt;

&lt;p&gt;But this batch of CVs was for a React developer role, and there are lots of very talented React developers out there. And when lots of people have worked with GraphQL and Docker and cloud services and D3, it may well come down to whether or not you have made an effort to convey that you like things to ‘look nice’.&lt;/p&gt;

&lt;p&gt;It’s worth saying too that the CVs that looked good didn’t go overboard; all they did was have a pleasing font, perhaps two columns instead of one for the layout, maybe a couple of icons and some colour, and probably didn’t span more than a couple of pages. But that was enough to convince me that these were people who would ensure that a bad-looking product didn’t get deployed.&lt;/p&gt;

&lt;h2&gt;Attention to Detail&lt;/h2&gt;

&lt;p&gt;Probably in the same vein, typos really jump out at you when you are looking at a lot of CVs. Again, I’m not saying I’m right or wrong on whether this should be an important criterion, but the fact that my subconscious reaction was to be turned off if I saw a spelling mistake–or a technology name that had been capitalised incorrectly–shows that somewhere in our psyches this stuff counts for something.&lt;/p&gt;

&lt;p&gt;As with the previous point about looks, I’d probably see this as more relevant for front-end people.&lt;/p&gt;

&lt;p&gt;And it might be worth saying that it’s usually not difficult to differentiate between those who have just not bothered to check their CV and those who have difficulty with spelling. I’ve worked with people in the latter camp, both senior managers and junior programmers, and it’s not a big deal–you can work with it and it would not stop me from hiring someone.&lt;/p&gt;

&lt;p&gt;But to keep with the theme of my ‘initial reaction’: I can’t help it that when I see someone write ‘Graphql’ I doubt whether the applicant really has that much experience with ‘GraphQL’. Maybe it’s worth double-checking the websites of the technologies you use one last time before sending off that CV.&lt;/p&gt;

&lt;h2&gt;Approaches v. Technologies&lt;/h2&gt;

&lt;p&gt;One other thing that I hadn’t expected was that I was drawn to mark people more highly if they gave some indication of the &lt;em&gt;approach&lt;/em&gt; they took to their coding and development. For example, a few people mentioned having experience with TDD, which I thought was good. That doesn’t mean I’d rule out someone who didn’t have TDD experience, but it’s an approach that people need to be aware of.&lt;/p&gt;

&lt;p&gt;However, more important than TDD: only a handful indicated that they liked to get involved in code reviews and saw them as a good way to share knowledge and build a consistent, high-quality codebase. Again, I would not rule out someone who didn’t mention code reviews, mentoring, and so on, provided they were relatively inexperienced; but I probably &lt;em&gt;would&lt;/em&gt; rule out someone who clearly has experience but hasn’t thought to mention these things.&lt;/p&gt;

&lt;h3&gt;Approach From Technologies&lt;/h3&gt;

&lt;p&gt;What I was also surprised by was how the list of technologies on a person’s CV led to me forming an impression of the kind of coder that person was. If I saw GraphQL in the mix then I found myself thinking ‘modern tech stack’. It may be unfair, especially since GraphQL wasn’t mentioned in the job description that was circulated; but after reviewing all of the CVs I found that it was a pretty accurate indicator of the types of projects someone had been involved in.&lt;/p&gt;

&lt;p&gt;There were other technologies that were similarly indicative, such as mention of ‘microservices’, or use of MongoDB, Elasticsearch, Redis, RabbitMQ, and so on. But there were very few CVs where these technologies appeared without GraphQL.&lt;/p&gt;

&lt;p&gt;There is probably no general rule here, since the technologies that differentiate will be different in each cohort of CVs; if every CV included GraphQL then it may have been the presence of something like Docker and/or Kubernetes that told you something about the kind of people you were looking at.&lt;/p&gt;

&lt;h2&gt;Public Profile&lt;/h2&gt;

&lt;p&gt;This last point may also be controversial, but I’m going to describe it so that you can do whatever you like with the information; if someone was not on LinkedIn or didn’t have a public Git account, I found myself thinking ‘well, what exactly do they do?’&lt;/p&gt;

&lt;p&gt;From this batch of 22 CVs, only one had answered any questions on Stack Overflow, and only one–a different person–had done any talks at Meetups. A couple had GitHub accounts, a couple were on LinkedIn and a couple had blogs.&lt;/p&gt;

&lt;p&gt;This is no doubt unfair, so again, blame my subconscious. But I can’t help thinking that if I’m going to work with some people day after day, I want to be with people who are sharing links to things they’ve read, people who are trying out new technologies and languages, people who have weekend projects, or who are contributing bug fixes back to open source projects.&lt;/p&gt;

&lt;p&gt;And although I’m not particularly a big fan of LinkedIn, I always quite like it when I see people who are at the early stages of their career making connections with everyone they met at last night’s Meetup.&lt;/p&gt;

&lt;p&gt;None of this is to ignore the current discussion about work/life balance–the point is not to demand that people pursue their education in their own time, spend all waking hours coding, or attend Meetups every night of the week. But it is to recognise that we are in a fast-moving industry, and if you want your CV to be picked from amongst that batch of 22, then it’s going to be important to think about how you come across.&lt;/p&gt;

&lt;h2&gt;Qualifications&lt;/h2&gt;

&lt;p&gt;One final thing has just occurred to me…I didn’t consider the qualifications of &lt;em&gt;anyone&lt;/em&gt;, which I imagine is down to the role that the CVs were for. You’d certainly be interested in degrees and PhDs if you were hiring a data scientist or someone to write an operating system; in these spaces theory is a major element. But for fast-moving technologies like front-end development, you can probably tell everything about a person from their experience.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I haven’t had to look at CVs for quite a while, so it was interesting to examine my own responses to reviewing this batch. What became clear is that I quickly ‘merged’ the CV and the person in my mind. If there were mistakes in the CV I couldn’t help but imagine someone who lacked attention to detail. If there was a little bit of colour in the CV or the person had spoken at a Meetup then I couldn’t help but think that this was a person who was prepared to go the extra mile. And if the person had a public Git account then my spontaneous reaction was that this was someone who wanted to keep learning, and really enjoyed the work they do.&lt;/p&gt;

&lt;p&gt;I may be wrong in my judgements or harsh in my conclusions, but if you’re about to send your CV out, at least you now have a couple more insights that might help you decide how your CV is representing you.&lt;/p&gt;

</description>
      <category>cv</category>
      <category>react</category>
      <category>jobhunting</category>
    </item>
    <item>
      <title>Let TypeScript Inference Take The Strain</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Sat, 27 Jun 2020 15:13:00 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/let-typescript-inference-take-the-strain-4d56</link>
      <guid>https://dev.to/youknowfordevs/let-typescript-inference-take-the-strain-4d56</guid>
      <description>&lt;p&gt;I was doing a code review recently and saw something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mapEventName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;eventName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;eventName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;UPDATE_ABC&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ABC Updated&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;UPDATE_DEF&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DEF Updated&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;eventName&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Putting aside whether this should use a &lt;code&gt;Map&lt;/code&gt; or some other technique instead of &lt;code&gt;switch&lt;/code&gt;, I was interested specifically in the type information that was provided for the return value.&lt;/p&gt;

&lt;p&gt;Something I see a lot when people adopt TypeScript is the tendency to type &lt;em&gt;everything&lt;/em&gt;. I think it’s a good idea to add types to structures and interfaces that pass between layers of our code. But adding extra code to indicate that a function returns a string, &lt;em&gt;when it so obviously returns a string&lt;/em&gt;, is overkill.&lt;/p&gt;
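&lt;p&gt;To make that distinction concrete, here is a minimal sketch (the interface and function names are mine, not from the code review I mentioned): the structure that crosses a layer boundary earns an explicit type, while the small function relies on inference.&lt;/p&gt;

```typescript
// Worth typing explicitly: a structure that passes between layers.
interface UserSummary {
  id: string
  displayName: string
}

// Not worth annotating: TypeScript infers the return shape from the
// object literal, and it is structurally assignable to UserSummary.
const toSummary = (id: string, displayName: string) => ({ id, displayName })

// Inference in action: this assignment typechecks without any cast.
const summary: UserSummary = toSummary('u1', 'Ada')
```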

&lt;p&gt;So what’s the alternative?&lt;/p&gt;

&lt;p&gt;Well, in this example there is no need for the return type to be specified. TypeScript has impressive powers of inference, and in this function it will have no trouble deducing that the return value will be a string. If we look at the function there are three places where a return value is specified; two of them stipulate a string constant to return, and the third returns the parameter that was passed in–which is itself declared as a string.&lt;/p&gt;
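&lt;p&gt;For clarity, here is the same function with the return annotation simply deleted; nothing else changes, and the behaviour is identical:&lt;/p&gt;

```typescript
// Same function as above, minus the ': string' return annotation.
// Hover over mapEventName in an editor and the inferred type is
// (eventName: string) => string.
const mapEventName = (eventName: string) => {
  switch (eventName) {
    case 'UPDATE_ABC':
      return 'ABC Updated'
    case 'UPDATE_DEF':
      return 'DEF Updated'
    default:
      return eventName
  }
}
```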

&lt;p&gt;Together these conditions combine such that there is &lt;em&gt;no possibility&lt;/em&gt; that this function will return anything other than a string. TypeScript will therefore act accordingly. To verify this, take a look at the IntelliSense provided by your favourite editor. Here is the IntelliSense with the return type specified explicitly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2D13xivB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/function-intellisense-with-return-type.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2D13xivB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/function-intellisense-with-return-type.png" alt="Function intellisense with return type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and here is exactly the same IntelliSense when the return type is omitted:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Em1VY182--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/function-intellisense-without-return-type.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Em1VY182--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/function-intellisense-without-return-type.png" alt="Function intellisense without return type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if we want to reassure ourselves that TypeScript is still doing the same checking, let’s try to use the function in a position where a number is required:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mapEventName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;UPDATE_ABC&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Put this in a TypeScript-enabled editor and you’ll see there is a type error, even though the return type is not explicitly specified:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K5Mk-DV4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/function-intellisense-incompatible-type.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K5Mk-DV4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/function-intellisense-incompatible-type.png" alt="Function intellisense incompatible type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You might ask whether leaving off one declaration gains us much, but I would suggest it’s a mindset. Rather than approaching TypeScript as a rigid straitjacket that must be wrapped around all of our code, we instead see it as a flexible combination of two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a supercharged linter that can infer types from our code and then check that we’re not doing anything dumb; and&lt;/li&gt;
&lt;li&gt;a type language that we can add as we need to, to help resolve ambiguities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives us the option of a much ‘lighter touch’ approach to TypeScript.&lt;/p&gt;

&lt;p&gt;For more on TypeScript’s type inference see &lt;a href="https://www.typescriptlang.org/docs/handbook/type-inference.html"&gt;Type Inference&lt;/a&gt;. And for a particularly clever scenario where types can be inferred from &lt;code&gt;assert()&lt;/code&gt; statements, see &lt;a href="https://www.typescriptlang.org/docs/handbook/release-notes/typescript-3-7.html#assertion-functions"&gt;‘asserts condition’&lt;/a&gt;.&lt;/p&gt;
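&lt;p&gt;As a rough sketch of that last idea (the function names here are mine), an assertion function narrows a value’s type for all the code that follows the call:&lt;/p&gt;

```typescript
// An assertion function: if it returns at all, TypeScript treats
// 'value' as a string from the call site onwards.
function assertIsString(value: unknown): asserts value is string {
  if (typeof value !== 'string') {
    throw new Error('Expected a string')
  }
}

function shout(input: unknown) {
  assertIsString(input)
  // 'input' is narrowed to string here, so no cast is needed.
  return input.toUpperCase()
}
```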

</description>
      <category>typescript</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Why You Need a Pipeline Runner For Your Node Streams</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Sat, 20 Apr 2019 09:12:00 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/why-you-need-a-pipeline-runner-for-your-node-streams-5bpm</link>
      <guid>https://dev.to/youknowfordevs/why-you-need-a-pipeline-runner-for-your-node-streams-5bpm</guid>
      <description>&lt;h1&gt;
  
  
  Basic Node Streams
&lt;/h1&gt;

&lt;p&gt;A basic Node streams application would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;zlib&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;zlib&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createReadStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inputPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;zlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createGzip&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createWriteStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputPath&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The key thing is that each stream provides a &lt;code&gt;pipe()&lt;/code&gt; method, which is used to bolt on additional streams.&lt;/p&gt;

&lt;p&gt;However, although this is correct, it doesn’t lend itself to easy manipulation; in particular we have to start with one stream and bolt other streams on to it, which throws out the symmetry and makes the whole process difficult to reason about.&lt;/p&gt;

&lt;h1&gt;A Better Model: A Pipeline of Streams&lt;/h1&gt;

&lt;p&gt;A better metaphor is of a pipeline that comprises multiple steps, each of which is a stream. In this model there is nothing ‘special’ about the first or last streams. Since version 10, Node has provided the module function &lt;a href="https://nodejs.org/api/stream.html#stream_stream_pipeline_streams_callback"&gt;stream.pipeline()&lt;/a&gt; that does exactly this. This function can be promisified, which makes the code even easier to read. Our previous pipeline would now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;util&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;util&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;util&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;promisify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;zlib&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;zlib&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createReadStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inputPath&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="nx"&gt;zlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createGzip&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createWriteStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now it’s much clearer that our streams are ‘steps’ in a pipeline, and they no longer play the role of &lt;em&gt;driving&lt;/em&gt; the pipeline.&lt;/p&gt;

&lt;p&gt;We’ll use this pattern as our canonical pipeline because it is much easier to augment.&lt;/p&gt;

&lt;h1&gt;Separating Pipeline Creation From Running&lt;/h1&gt;

&lt;p&gt;Now that we have the idea of a pipeline as our primary concept, we can start to play with it. In particular we’ll find it extremely useful if we can separate the step of &lt;em&gt;creating&lt;/em&gt; a pipeline from the step of &lt;em&gt;running&lt;/em&gt; it.&lt;/p&gt;

&lt;p&gt;We could then, for example, create a pipeline and check it for errors even before it is run; we could check that the first step is a readable stream and the last step a writable one, or ensure that all necessary resources such as databases or search engines are available before the pipeline starts.&lt;/p&gt;

&lt;p&gt;We could also insert extra steps that wouldn’t need to be specified explicitly in the pipeline, such as pushing and popping items to and from a queue between each step, or adding handlers for step-specific errors.&lt;/p&gt;

&lt;p&gt;Another benefit of separating pipeline creation from execution is that we could optimise the pipeline before it’s run, perhaps by merging or reordering steps. We might even create the pipeline on one machine, and send the pipeline to be run on another machine…or multiple machines if we have a way to split the work.&lt;/p&gt;

&lt;p&gt;And not only could we do interesting things with the full pipeline, but we could even manipulate &lt;em&gt;parts&lt;/em&gt; of the pipeline; we could run half of the pipeline, then materialise the results, and later run the rest of the pipeline using the saved results (incidentally allowing a pipeline to be upgraded, even whilst running), or we could execute individual pipeline steps as serverless functions, with all of the advantages that that would bring.&lt;/p&gt;

&lt;p&gt;To get ourselves to a situation where these kinds of things are possible our next step is to separate the pipeline itself from its runner.&lt;/p&gt;

&lt;h2&gt;Creating a Runner&lt;/h2&gt;

&lt;p&gt;The basic form of a runner is to first create some steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createReadStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inputPath&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="nx"&gt;zlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createGzip&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createWriteStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and then to run those steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If we wrap this functionality in a class then we have a foundation on which to add the other features we mentioned earlier. So let’s create a class that allows us to &lt;em&gt;register&lt;/em&gt; a set of steps with one method, and to &lt;em&gt;run&lt;/em&gt; those steps with another method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;Runner&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;util&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;util&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;util&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;promisify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;steps&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can use our runner like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Runner&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createReadStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inputPath&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="nx"&gt;zlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createGzip&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createWriteStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;runner&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It’s exactly the same as our previous pipeline, but with this structure we’ve got lots of points at which we can insert the new kinds of functionality that we mentioned earlier, by inheriting from the base class.&lt;/p&gt;

&lt;p&gt;For example, we could add more features by way of the &lt;code&gt;register()&lt;/code&gt; method, such as pipeline validation, step optimisation, insertion of additional steps, and so on. (And manipulation of the pipeline is as simple as manipulating an array.)&lt;/p&gt;

&lt;p&gt;And we could modify the &lt;code&gt;run()&lt;/code&gt; method to send the steps on to some other server, or launch more workers, sharing the work between them.&lt;/p&gt;

&lt;p&gt;In future posts we’ll start to flesh out how to achieve some of these more advanced features by building on this class.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Node streams are incredibly powerful, and are a fundamental building-block of efficient and fast data-processing and transformation applications. However, the usual model of an input stream that pipes its output to another stream is quite difficult to manipulate and reason about. Node supports a better model by way of the &lt;code&gt;pipeline()&lt;/code&gt; function, and this post shows how it can be used to construct pipelines that comprise multiple streams, and then to separately execute those pipelines.&lt;/p&gt;

</description>
      <category>node</category>
      <category>streams</category>
    </item>
    <item>
      <title>Getting Control Of Your .dockerignore Files</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Fri, 07 Dec 2018 09:04:01 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/getting-control-of-your-dockerignore-files-50m4</link>
      <guid>https://dev.to/youknowfordevs/getting-control-of-your-dockerignore-files-50m4</guid>
      <description>&lt;p&gt;Every now and then I’ll notice a file inside a Docker container that really shouldn’t be there. Thankfully it’s never been a &lt;code&gt;.env&lt;/code&gt; file or an SSH key, but even so, any unused file or directory takes up space in the image and makes that image slower to build and pass around. Best practice suggests using the oft-neglected &lt;code&gt;.dockerignore&lt;/code&gt; file to keep our secrets secret and make sure our Docker images are as lean as possible. But this file then usually becomes a maintenance nightmare as the list of exclusions grows.&lt;/p&gt;

&lt;p&gt;This post shows how to control &lt;code&gt;.dockerignore&lt;/code&gt; by indicating which files we want to &lt;em&gt;include&lt;/em&gt; rather than those we want to &lt;em&gt;exclude&lt;/em&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Goal
&lt;/h1&gt;

&lt;p&gt;For a while now I’ve been trying to come up with an easy way to exclude unnecessary files from my Docker images. Obviously I want to keep secrets out, and I want to prevent any unused directories from being transferred to the Docker daemon when building–they slow down the build as well as increase the size of the resulting images.&lt;/p&gt;

&lt;p&gt;But I also wanted to find a way to exclude files that relate to the build process itself, like &lt;code&gt;Dockerfile&lt;/code&gt; and &lt;code&gt;docker-compose.yml&lt;/code&gt;; it’s unnecessary clutter if they get transferred to an image and it would be better if they weren’t included at all. This is even more important should you want to share an image through public channels like Docker Hub while keeping the source that builds that image internal to your organisation; in this case there may be private information in these build files and you certainly won’t want them to be shared.&lt;/p&gt;

&lt;p&gt;One last requirement is that when we make a mistake with our ignore rules we want to know about it. The way &lt;code&gt;.dockerignore&lt;/code&gt; is normally used, if you mistakenly put something like ‘git’ into your file as a pattern, instead of ‘.git’, you won’t know anything about it unless you spot that a lot of data is being transferred to your build.&lt;/p&gt;

&lt;p&gt;The rest of this post looks at how we might solve these problems.&lt;/p&gt;

&lt;h1&gt;
  
  
  Approach #1: Build From A Single Directory
&lt;/h1&gt;

&lt;p&gt;Most of the approaches I have tried in the last year or so have revolved around different ways of organising the project directories. There are lots of ways a project could be organised, but in general, the approach is to try to keep the source, tests, documentation, generated files, and anything else that is part of the project, separate from each other.&lt;/p&gt;

&lt;p&gt;As part of this separation, we also want to ensure that any directories that we don’t want to appear inside the Docker image get set as &lt;em&gt;peers&lt;/em&gt; of the main directory that we want to include. This simply means that rather than having &lt;code&gt;app &amp;gt; docs&lt;/code&gt; or &lt;code&gt;app &amp;gt; test&lt;/code&gt; as our directories, we would instead have &lt;code&gt;app&lt;/code&gt;, &lt;code&gt;docs&lt;/code&gt; and &lt;code&gt;test&lt;/code&gt; as top-level directories sitting alongside each other.&lt;/p&gt;

&lt;p&gt;By having one directory that contains all of the files that will be needed in a build, and everything else kept safely apart in other directories, we at least start to lower the risk of mistakenly copying something important or unnecessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting the Context
&lt;/h2&gt;

&lt;p&gt;Once our directories are organised then it’s a simple matter to set the Docker build context to refer to the single directory that should be included in the Docker image. By setting the context to a sub-directory we have ensured that all other directories are excluded, whether they are &lt;code&gt;.git&lt;/code&gt;, &lt;code&gt;node_modules&lt;/code&gt;, documentation and tests, and so on.&lt;/p&gt;

&lt;p&gt;However, the weakness with this approach is that since the build context must contain &lt;em&gt;everything&lt;/em&gt; that Docker will need to build the image, that, unfortunately, means we need to place &lt;code&gt;Dockerfile&lt;/code&gt; in our source directory too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ignoring With &lt;code&gt;.dockerignore&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;It’s easy enough to stop the &lt;code&gt;Dockerfile&lt;/code&gt; from being copied into the image by creating a &lt;code&gt;.dockerignore&lt;/code&gt; file and then adding ‘Dockerfile’ to it. The &lt;code&gt;.dockerignore&lt;/code&gt; file is an ‘ignore file’ which tells the build process which files to leave out when transferring the context to the Docker daemon. For this situation it could be as simple as this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In .dockerignore
Dockerfile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Don’t worry that this could prevent the whole build process from working. In the case of &lt;code&gt;Dockerfile&lt;/code&gt;, Docker will still transfer it to the daemon to guide the build process, regardless of whether we exclude it or not. But by adding it to our ignore file we stop it going any further through the process; i.e., it won’t be available to be copied into the image.&lt;/p&gt;

&lt;p&gt;Of course, if we add a &lt;code&gt;.dockerignore&lt;/code&gt; file to our source directory then that will be available to the Docker daemon, so we now need to ignore that as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In .dockerignore
.dockerignore
Dockerfile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Problems With Approach #1
&lt;/h2&gt;

&lt;p&gt;Although this is a nice simple solution, there’s still nothing to say that we might not accidentally copy something into the image that we didn’t want to. For example, when developing code on our laptop we’ll probably have a &lt;code&gt;.env&lt;/code&gt; file full of database passwords and server names in the source directory. If we forget to exclude the &lt;code&gt;.env&lt;/code&gt; file as well then it will most likely be copied into our image.&lt;/p&gt;

&lt;p&gt;I also prefer not to leave the Docker-related files in the same directory as the source code since it is mixing layers–the source of our application now sits alongside the instructions to &lt;em&gt;build&lt;/em&gt; our application. It’s all a bit ‘meta’.&lt;/p&gt;

&lt;h1&gt;
  
  
  Approach #2: A Richer &lt;code&gt;.dockerignore&lt;/code&gt; File
&lt;/h1&gt;

&lt;p&gt;So the next step would be to keep the build files out of the main directory, moving them to the root of the project. If we do that then we must make the root of the project the ‘build context’, and that means we’re now exposing all of our files to Docker. To exclude the peer directories we mentioned earlier we would modify the ignore file further, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In .dockerignore
.dockerignore
Dockerfile
dist
docs
test
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If the code we want to put in our image is in the &lt;code&gt;app&lt;/code&gt; directory then this &lt;code&gt;.dockerignore&lt;/code&gt; file will allow that directory to be copied, but will exclude the &lt;code&gt;dist&lt;/code&gt;, &lt;code&gt;docs&lt;/code&gt; and &lt;code&gt;test&lt;/code&gt; directories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problems With Approach #2
&lt;/h2&gt;

&lt;p&gt;But whilst we’re excluding tests and documentation, we should also be excluding our &lt;code&gt;.git&lt;/code&gt; directory (to keep the size down) and other build files, like &lt;code&gt;docker-compose.yml&lt;/code&gt; and &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; (to keep things private). Now our ignore file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In .dockerignore
.git
.gitlab-ci.yml
.dockerignore
dist
Dockerfile
docker-compose.yml
docs
test
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This approach of continually growing the ignore file works and you’ll find many blog posts that describe exactly what we’re doing here. But it soon gets very, very tedious to keep excluding all of the files and directories that are &lt;em&gt;not&lt;/em&gt; needed in the image, and after a while, you’ll find yourself turning a blind eye to any ‘harmless’ files that are exposed or that get inadvertently copied in.&lt;/p&gt;

&lt;p&gt;But that means we’re back to square one again because we can &lt;em&gt;never really be sure&lt;/em&gt; that we haven’t copied in something that shouldn’t be there, or that the image we’ve created couldn’t be smaller.&lt;/p&gt;

&lt;p&gt;Luckily it turns out that there is a much easier way to maintain these exclusions, and it’s pretty much foolproof.&lt;/p&gt;

&lt;h1&gt;
  
  
  Approach #3: Define &lt;em&gt;Inclusions&lt;/em&gt;, Not &lt;em&gt;Exclusions&lt;/em&gt;
&lt;/h1&gt;

&lt;p&gt;As we all know, the &lt;code&gt;.dockerignore&lt;/code&gt; file contains a list of file patterns to exclude. But it doesn’t &lt;em&gt;just&lt;/em&gt; contain that. The ignore file also contains a list of file patterns to &lt;em&gt;not&lt;/em&gt; exclude.&lt;/p&gt;

&lt;p&gt;Say we have a blog full of Markdown files that we want to exclude from our image because they will be converted to HTML beforehand and are therefore unnecessary at runtime. But let’s say also that we still want to include the &lt;code&gt;README.md&lt;/code&gt; file from our project because it might help anyone looking inside a running container. We could achieve this goal with the following entries in a &lt;code&gt;.dockerignore&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In .dockerignore
# Exclude all Markdown files:
#
*.md

# Now make an exception for one file:
#
!README.md
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With this file, when Docker starts the build process the only Markdown file that it will pass to the daemon is the &lt;code&gt;README.md&lt;/code&gt; file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exclude Everything
&lt;/h2&gt;

&lt;p&gt;So why don’t we use this feature to our advantage, and begin every &lt;code&gt;.dockerignore&lt;/code&gt; file with a statement to &lt;em&gt;exclude everything&lt;/em&gt;? If we start by saying that &lt;em&gt;nothing&lt;/em&gt; will be sent to the Docker daemon when building an image, it will be almost impossible for us to accidentally include anything that’s secret or unecessary.&lt;/p&gt;

&lt;p&gt;The pattern that we need in the &lt;code&gt;.dockerignore&lt;/code&gt; file to ignore everything is simply an asterisk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In .dockerignore
# Exclude everything:
#
*
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now of course, if we exclude everything when we’re building the image then it won’t just be our secrets that don’t get copied in…nothing will! It’s like being certain of not losing a race by not entering at all.&lt;/p&gt;

&lt;p&gt;So now we need to expand our &lt;code&gt;.dockerignore&lt;/code&gt; file with information about the files that we &lt;em&gt;do&lt;/em&gt; want to be included.&lt;/p&gt;

&lt;p&gt;And those files should only be those that are mentioned in our &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Only Include Those Files Referenced In &lt;code&gt;Dockerfile&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s assume that we’re building a Node app. And let’s also assume that we’re following best practice in our &lt;code&gt;Dockerfile&lt;/code&gt; by installing all of the dependencies in one layer and our source code in another layer.&lt;/p&gt;

&lt;p&gt;The ‘install dependencies’ step might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In Dockerfile
# Copy dependency definitions and then install:
#
COPY package.json .
ENV NODE_ENV=production
RUN npm install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;while the ‘copy source’ step could be as simple as this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In Dockerfile
# Copy our app:
#
COPY app/ .
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When building the Docker image with this &lt;code&gt;Dockerfile&lt;/code&gt; the Docker daemon must have access to both the &lt;code&gt;package.json&lt;/code&gt; file and the &lt;code&gt;app&lt;/code&gt; sub-directory. And since we’ve excluded everything at the top of the ignore file, every file or directory referred to in the build process must be explicitly allowed through.&lt;/p&gt;

&lt;p&gt;We can easily add these files to our &lt;code&gt;.dockerignore&lt;/code&gt; file with a ‘don’t exclude’ pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In .dockerignore
# Exclude everything:
#
*

# Now un-exclude package.json and the app folder:
#
!app
!package.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That’s it…we’re done. The Docker daemon won’t receive &lt;code&gt;.git&lt;/code&gt; or test directories, or &lt;code&gt;.env&lt;/code&gt;, or &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;, or &lt;code&gt;Dockerfile&lt;/code&gt;, or &lt;code&gt;docker-compose.yml&lt;/code&gt; or anything else.&lt;/p&gt;

&lt;p&gt;Now all we have to do is keep the files &lt;code&gt;.dockerignore&lt;/code&gt; and &lt;code&gt;Dockerfile&lt;/code&gt; in sync–i.e., ensuring that they both refer to the same paths and files–and nothing else will accidentally get into our Docker images.&lt;/p&gt;
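&lt;p&gt;That sync check is easy to script. Here is a hypothetical Node helper–it only understands simple &lt;code&gt;COPY&lt;/code&gt;/&lt;code&gt;ADD&lt;/code&gt; lines without flags, so treat it as a sketch rather than a complete Dockerfile parser:&lt;/p&gt;

```javascript
// Hypothetical helper: report any path named in a Dockerfile COPY/ADD
// instruction that has no matching '!' entry in .dockerignore.
// Only handles simple 'COPY <src> <dest>' lines (no --chown etc.):
function missingIgnoreEntries(dockerfile, dockerignore) {
  const copied = dockerfile
    .split('\n')
    .map(line => line.trim())
    .filter(line => /^(COPY|ADD)\s/.test(line))
    .map(line => line.split(/\s+/)[1].replace(/\/$/, ''))

  const allowed = new Set(
    dockerignore
      .split('\n')
      .filter(line => line.startsWith('!'))
      .map(line => line.slice(1).trim())
  )

  return copied.filter(path => !allowed.has(path))
}
```

&lt;p&gt;Running something like this in CI would catch a &lt;code&gt;COPY&lt;/code&gt; instruction whose source was never un-excluded, before the build fails.&lt;/p&gt;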

&lt;p&gt;One of the many neat features of this approach is that if we forget to add a rule we’ll hear about it. If we indicate in our &lt;code&gt;Dockerfile&lt;/code&gt; that we want to copy in the &lt;code&gt;README&lt;/code&gt; or the &lt;code&gt;LICENSE&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In Dockerfile
# Copy our app and the license:
#
COPY app/ .
COPY LICENSE .
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;we’ll get an error at build time unless we add &lt;code&gt;LICENSE&lt;/code&gt; to the &lt;code&gt;.dockerignore&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In .dockerignore
# Exclude everything:
#
*

# Now un-exclude package.json, LICENSE and the app folder:
#
!app
!LICENSE
!package.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In fact, the patterns that we need in our &lt;code&gt;.dockerignore&lt;/code&gt; file to specify what we want to &lt;em&gt;include&lt;/em&gt; are simply the &lt;code&gt;ADD&lt;/code&gt; and &lt;code&gt;COPY&lt;/code&gt; entries in our &lt;code&gt;Dockerfile&lt;/code&gt;, which is much simpler to manage than trying to keep track of what needs to be excluded.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problems With Approach #3
&lt;/h2&gt;

&lt;p&gt;Although this is my preferred solution and solves a lot of problems, there are still a couple of weaknesses. While we’ll be notified by the build process if we are missing an important rule from our &lt;code&gt;.dockerignore&lt;/code&gt; file, we &lt;em&gt;won’t&lt;/em&gt; be notified if our rules are too liberal. So in our example above, if we later decide not to copy the &lt;code&gt;LICENSE&lt;/code&gt; file when building, there is nothing to tell us that we can remove the pattern from &lt;code&gt;.dockerignore&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And also, we still need to remember not to leave anything in the &lt;code&gt;app&lt;/code&gt; directory that might get copied in. As it happens, the example I gave earlier of a &lt;code&gt;.env&lt;/code&gt; file (in Approach #1) is now less of an issue since it is best kept in the root directory anyway.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;I’ve tried a few different solutions to this problem over the years, but I think this ‘include file’ approach is about as minimal and easy to maintain as it gets. If the notion of ‘exclude everything and then un-exclude the files you want’ seems a bit quirky, just flip things on their head and treat the &lt;code&gt;.dockerignore&lt;/code&gt; file as if it were an ‘include’ file–in other words, see it as ‘only the files listed will be included in the build’.&lt;/p&gt;

&lt;p&gt;I think I’d prefer it if this was the default behaviour from Docker, because then we would get both security and minimal image size right out of the box. But unless a &lt;code&gt;.dockerinclude&lt;/code&gt; feature is added to Docker any time soon, the solution described here is about as good as it will get.&lt;/p&gt;

</description>
      <category>docker</category>
    </item>
    <item>
      <title>Using _writev() to Create a Fast, Writable Stream for Elasticsearch</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Mon, 29 Oct 2018 11:37:01 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/using-writev-to-create-a-fast-writable-stream-for-elasticsearch-2ghb</link>
      <guid>https://dev.to/youknowfordevs/using-writev-to-create-a-fast-writable-stream-for-elasticsearch-2ghb</guid>
      <description>&lt;p&gt;We all know how great Node streams are. But it wasn’t until I recently needed to create (yet another) writable stream wrapper for Elasticsearch that I realised just how much work the streaming APIs can do for you. And in particular how powerful the &lt;code&gt;_writev()&lt;/code&gt; method is.&lt;/p&gt;

&lt;p&gt;I was looking to wrap the Elasticsearch client in a writable stream so that I could use it in a streaming pipeline. I’ve done this many times before, in many different contexts–such as creating Elasticsearch modules to be used with Gulp and Vinyl–so I was all set to follow the usual pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;my first step would be to set up an Elasticsearch client, using the Elasticsearch API;&lt;/li&gt;
&lt;li&gt;I’d then add a function that gets called with whatever entry should be written to the Elasticsearch server;&lt;/li&gt;
&lt;li&gt;to speed writing up I wouldn’t write this entry straight to the server, but instead buffer each of the entries in an array (the size of which would of course be configurable). Then, once the buffer was full the entries would be written en masse to the Elasticsearch server using the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html#api-bulk"&gt;bulk update API&lt;/a&gt; (which is much, much faster than writing records one at a time);&lt;/li&gt;
&lt;li&gt;when the source of the data for the writable stream indicates that there is no more data to send I’d check whether there is any data still in the buffer, and if so call a ‘flush’ function;&lt;/li&gt;
&lt;li&gt;and once all data is flushed, I’d delete the client.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this will probably surprise you, and you’d no doubt write an interface to Elasticsearch in much the same way yourself.&lt;/p&gt;

&lt;p&gt;But what might surprise you–especially if you haven’t looked at &lt;a href="https://nodejs.org/api/stream.html#stream_writable_streams"&gt;Node’s Writable Streams&lt;/a&gt; for a while–is how many of these steps could be done for you by the Node libraries.&lt;/p&gt;

&lt;p&gt;To kick things off, let’s create a class that extends the &lt;a href="https://nodejs.org/api/stream.html#stream_class_stream_writable"&gt;Node stream &lt;code&gt;Writable&lt;/code&gt; class&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Writable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can start adding each of the features in our list.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating an Elasticsearch Client
&lt;/h1&gt;

&lt;p&gt;The first step we described above was to create an Elasticsearch client, using the &lt;a href="https://www.npmjs.com/package/elasticsearch"&gt;Elasticsearch API&lt;/a&gt;, so let’s add that to the constructor of our class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;elasticsearch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;elasticsearch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Writable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;

    &lt;span class="cm"&gt;/**
     * Create the Elasticsearch client:
     */&lt;/span&gt;

    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;elasticsearch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;host&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can now call our class with some configuration, and we’ll have a writable stream with an Elasticsearch client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sink&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWriteStream&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;es:9200&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Of course, this stream doesn’t do anything yet, so let’s add the method that the streaming infrastructure will call whenever some other stream wants to write a record.&lt;/p&gt;

&lt;h1&gt;
  
  
  Writing Records
&lt;/h1&gt;

&lt;p&gt;When implementing a writable stream class, the only method we need to provide is &lt;a href="https://nodejs.org/api/stream.html#stream_writable_write_chunk_encoding_callback_1"&gt;&lt;code&gt;_write()&lt;/code&gt;&lt;/a&gt; which is called whenever new data is available from the stream that is providing that data. In the case of our Elasticsearch stream, to forward the record on we only need to call &lt;a href="https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html#api-index"&gt;&lt;code&gt;index()&lt;/code&gt;&lt;/a&gt; on the client that we created in the constructor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Writable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="cm"&gt;/**
   * When writing a single record, we use the index() method of
   * the ES API:
   */&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;_write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;enc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="cm"&gt;/**
     * Push the object to ES and indicate that we are ready for the next one.
     * Be sure to propagate any errors:
     */&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;body&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that once we’ve successfully written our record we then call &lt;code&gt;next()&lt;/code&gt; to indicate to the streaming infrastructure that we’re happy to receive more records, i.e., more calls to &lt;code&gt;_write()&lt;/code&gt;. In fact, if we &lt;em&gt;don’t&lt;/em&gt; call &lt;code&gt;next()&lt;/code&gt; we won’t receive any more data.&lt;/p&gt;

&lt;h1&gt;
  
  
  Index and Type
&lt;/h1&gt;

&lt;p&gt;When writing to Elasticsearch we need to provide the name of an index and a type for the document, so we’ve added those to the config that was provided to the constructor, and we can then pass these values on to the call to &lt;code&gt;index()&lt;/code&gt;. We’ll now need to invoke our stream with something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sink&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWriteStream&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;es:9200&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-index&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Buffering
&lt;/h1&gt;

&lt;p&gt;As things stand, we already have a working writable stream for Elasticsearch. However, if we’re planning to insert hundreds of thousands of records then it will be slow, and a simple optimisation would be to buffer the records and use the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html#api-bulk"&gt;bulk update API&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bulk Update API
&lt;/h2&gt;

&lt;p&gt;The bulk update API allows us to perform many operations at the same time, perhaps inserting thousands of records in one go. Rather than defining each record to be inserted as we did with the &lt;code&gt;index()&lt;/code&gt; call, we need to create &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/6.3/docs-bulk.html"&gt;a list that contains pairs of entries&lt;/a&gt;; one that indicates the operation to carry out–such as an insert or update–and one that contains the data for the operation.&lt;/p&gt;
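&lt;p&gt;As an invented illustration (the record values here are made up), building that paired list from an array of documents looks like this:&lt;/p&gt;

```javascript
/**
 * Build a bulk API body from some invented records: a flat array
 * alternating an action entry with the document it applies to.
 */
const records = [
  { title: 'first', count: 1 },
  { title: 'second', count: 2 }
]

const body = records.reduce((arr, doc) => {
  arr.push({ index: { } }) // the action: index (insert) this document
  arr.push(doc)            // the data for the action
  return arr
}, [])

// body now holds four entries: two action/data pairs.
```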

&lt;h2&gt;
  
  
  Using an Array
&lt;/h2&gt;

&lt;p&gt;The usual ‘go to’ implementation here would be to create an array in the class constructor, and then push the rows of data into that array with each call to &lt;code&gt;_write()&lt;/code&gt;. Then, when the array is full, construct a call to the bulk API, still within the &lt;code&gt;_write()&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;The problem here, though, is that in order to implement backpressure properly we need quite a sophisticated interaction with the &lt;code&gt;next()&lt;/code&gt; function: we need to allow data to flow to our stream as long as the buffer is not full, and we need to prevent new data from arriving until we’ve had a chance to write the records to Elasticsearch.&lt;/p&gt;

&lt;p&gt;It turns out that the Node streaming API can manage the buffer &lt;em&gt;and&lt;/em&gt; the backpressure for us.&lt;/p&gt;

&lt;h2&gt;
  
  
  _writev()
&lt;/h2&gt;

&lt;p&gt;Although the bare minimum we need to provide in our writable stream class is a &lt;code&gt;_write()&lt;/code&gt; method, there is another method we can create if we like, called &lt;a href="https://nodejs.org/api/stream.html#stream_writable_writev_chunks_callback"&gt;&lt;code&gt;_writev()&lt;/code&gt;&lt;/a&gt;. Where the first function is called once per record, the second is called with a &lt;em&gt;list&lt;/em&gt; of records. In a sense, the streaming API is doing the whole &lt;em&gt;create an array and store the items until the array is full and then send them on&lt;/em&gt; bit for us.&lt;/p&gt;

&lt;p&gt;Here’s what our &lt;code&gt;_writev()&lt;/code&gt; method would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Writable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;_writev&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;chunks&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="cm"&gt;/**
       * Each entry to the bulk API comprises an instruction (like 'index'
       * or 'delete') on one line, and then some data on the next line:
       */&lt;/span&gt;

      &lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;arr&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;

    &lt;span class="cm"&gt;/**
     * Push the array of actions to ES and indicate that we are ready
     * for more data. Be sure to propagate any errors:
     */&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bulk&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;body&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The streaming API will buffer records and then at a certain point hand them all over to our &lt;code&gt;_writev()&lt;/code&gt; function. This gives us the main benefit of buffering data–that we can then use the bulk update API–without actually having to create and manage a buffer, or look after backpressure.&lt;/p&gt;

&lt;h1&gt;
  
  
  Buffer Size
&lt;/h1&gt;

&lt;p&gt;If we’d created the buffer ourselves we’d have had complete control over how big the buffer is, but can we still control the buffer size if the Node streaming API is managing the buffer for us?&lt;/p&gt;

&lt;p&gt;It turns out we can, by using the &lt;a href="https://nodejs.org/api/stream.html#stream_constructor_new_stream_writable_options"&gt;generic &lt;code&gt;highWaterMark&lt;/code&gt; feature&lt;/a&gt;, which is used throughout the streams API to indicate how large buffers should be.&lt;/p&gt;

&lt;p&gt;The best way to implement this in our writable stream is to have two parameters for our constructor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one which will provide configuration for the Elasticsearch connection, such as server address, timeout configuration, the name of the index and type, and so on;&lt;/li&gt;
&lt;li&gt;another which provides settings for the writable stream itself, such as &lt;code&gt;highWaterMark&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is easily added, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Writable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;

    &lt;span class="cm"&gt;/**
     * Create the Elasticsearch client:
     */&lt;/span&gt;

    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;elasticsearch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;host&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And now we can control the size of the buffer–and hence, the number of records that are being written by each call to the bulk API–by setting options in the constructor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;esConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;es:9200&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-index&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sink&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWriteStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;esConfig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;highWatermark&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Closing the Elasticsearch Client
&lt;/h1&gt;

&lt;p&gt;All that remains from our original checklist is to close the client when there is no more data to receive. To implement this, all we need to do is to add another optional method, &lt;a href="https://nodejs.org/api/stream.html#stream_writable_destroy_err_callback"&gt;&lt;code&gt;_destroy()&lt;/code&gt;&lt;/a&gt;. This is called by the streaming infrastructure when there is no more data, and would look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;_destroy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;As you can see, the Node streaming API has done much of the work of buffering for us, which means we don’t get bogged down trying to implement backpressure properly. Because all we have to supply are the methods &lt;code&gt;_write()&lt;/code&gt;, &lt;code&gt;_writev()&lt;/code&gt; and &lt;code&gt;_destroy()&lt;/code&gt;, our code ends up very clean, focusing our attention on only the parts required to spin up and destroy a connection to Elasticsearch, and on the functions needed to write a single record or a batch. The full implementation looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;elasticsearch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;elasticsearch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Writable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;

    &lt;span class="cm"&gt;/**
     * Create the Elasticsearch client:
     */&lt;/span&gt;

    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;elasticsearch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;host&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;_destroy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="cm"&gt;/**
   * When writing a single record, we use the index() method of
   * the ES API:
   */&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;_write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;enc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="cm"&gt;/**
     * Push the object to ES and indicate that we are ready for the next one.
     * Be sure to propagate any errors:
     */&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;body&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;_writev&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;chunks&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="cm"&gt;/**
       * Each entry to the bulk API comprises an instruction (like 'index'
       * or 'delete') and some data:
       */&lt;/span&gt;

      &lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;arr&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;

    &lt;span class="cm"&gt;/**
     * Push the array of actions to ES and indicate that we are ready
     * for more data. Be sure to propagate any errors:
     */&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bulk&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;body&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ElasticsearchWritableStream&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



</description>
      <category>elasticsearch</category>
      <category>node</category>
      <category>streams</category>
    </item>
    <item>
      <title>Using Disqus with Netlify</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Fri, 20 Jul 2018 11:37:01 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/using-disqus-with-netlify-2ejc</link>
      <guid>https://dev.to/youknowfordevs/using-disqus-with-netlify-2ejc</guid>
      <description>&lt;p&gt;To use Disqus on Netlify you can either let Netlify do the work of embedding the required snippet, or get your static-site generator (SSG) to take the strain.&lt;/p&gt;

&lt;h1&gt;
  
  
  Netlify
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y7zoe1OX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/netlify.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y7zoe1OX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/netlify.svg" alt="Netlify Logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Netlify is great. In the &lt;a href="https://dev.to/2012/02/24/choosing-a-blogging-platform/"&gt;search for the ideal blogging platform&lt;/a&gt; it has the perfect architecture.&lt;/p&gt;

&lt;p&gt;First, you write all of your posts in Markdown and keep them in a Git repo, and then use a static-site generator (SSG) to build your blog. Of course you do, you might say…where’s the news?&lt;/p&gt;

&lt;p&gt;Well, second, whilst you might be used to running your SSG locally and then pushing the generated HTML to somewhere like S3 or a CDN, with Netlify that’s done for you, by connecting Netlify to your GitHub, GitLab or Bitbucket repo.&lt;/p&gt;

&lt;p&gt;Yawn…you can do that with GitHub Pages, you tell me.&lt;/p&gt;

&lt;p&gt;But here is where it gets interesting; Netlify has built a whole set of features into its build process that can help you manage the &lt;em&gt;generated&lt;/em&gt; blog without having to modify the repo itself. You can add forms and logins to your blog, for example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ifx7xrUT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/aws-lambda.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ifx7xrUT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/aws-lambda.jpeg" alt="AWS Lambda Logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you want to take the site beyond a blog and into application territory you can even add AWS Lambda Functions.&lt;/p&gt;

&lt;h1&gt;
  
  
  Using Netlify Snippets
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o3Em4xqr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/disqus.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o3Em4xqr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/disqus.svg" alt="Disqus Logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The easiest way to get Disqus going in Netlify is to create a snippet. Simply paste the code from Disqus’s instructions for the &lt;a href="https://help.disqus.com/installation/universal-embed-code"&gt;universal embed code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are two problems with this, though.&lt;/p&gt;

&lt;p&gt;The first is that Netlify only supports placing snippets before the closing &lt;code&gt;head&lt;/code&gt; tag or before the closing &lt;code&gt;body&lt;/code&gt; tag. However, for Disqus to look its best in our blog it would ideally be placed before the &lt;em&gt;footer&lt;/em&gt;. In the case of this blog (which is generated by Jekyll) that’s just before the closing &lt;code&gt;article&lt;/code&gt; tag.&lt;/p&gt;

&lt;p&gt;The second issue is that in order to solve the &lt;a href="https://help.disqus.com/troubleshooting/use-configuration-variables-to-avoid-split-threads-and-missing-comments"&gt;split threads and missing comments problem&lt;/a&gt; we’d like to be able to insert the page URL and a unique identifier into the embedded script. This is not possible in Netlify snippets though, since there is no support for variable substitution.&lt;/p&gt;
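&lt;p&gt;For reference, the relevant part of the embed code is a small configuration function (the values below are invented; in practice each page’s canonical URL and a stable, unique identifier would be substituted in):&lt;/p&gt;

```javascript
/**
 * The Disqus configuration variables that avoid split threads.
 * The values here are invented examples; a real site substitutes
 * each page's canonical URL and a stable, unique identifier.
 */
var disqus_config = function () {
  this.page.url = 'https://example.com/2018/07/20/using-disqus-with-netlify.html'
  this.page.identifier = 'using-disqus-with-netlify'
}
```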

&lt;p&gt;However, neither of these issues is a show-stopper; having the Disqus comments placed at the end of the page rather than before the footer looks a bit odd but not terrible. And I’ve seen many sites that don’t bother to put the page URL and identifier values into the Disqus script–probably because they have canonical URLs for their site pages and don’t suffer from the ambiguity that can cause the ‘split threads’ problem.&lt;/p&gt;

&lt;p&gt;So if neither the layout problem nor the canonical URL problem is an issue for you, then using Netlify snippets is certainly the best way to go, because it is &lt;em&gt;completely independent of the SSG used to generate the site&lt;/em&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Using the SSG
&lt;/h1&gt;

&lt;p&gt;The other approach to incorporating Disqus is to get your SSG to do the work. Most SSGs will let you trigger inclusion of the required snippet with just a little configuration. In the case of Jekyll, it’s simply a matter of adding your site’s Disqus shortname to the &lt;code&gt;_config.yml&lt;/code&gt; file. In my case it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="na"&gt;disqus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;shortname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;you-know-for-devs&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Jekyll won’t add the correct snippet in preview sites but it will when generating a site in production mode. This is easily done in Netlify by setting &lt;code&gt;JEKYLL_ENV&lt;/code&gt; to the value &lt;code&gt;production&lt;/code&gt; in the &lt;em&gt;Build environment variables&lt;/em&gt; section of your site’s &lt;em&gt;Settings&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--87vfF_91--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/set-jekyll_env-netlify.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--87vfF_91--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/set-jekyll_env-netlify.png" alt="Set JEKYL_ENV in Netlify"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the config and environment variable are added the result is that the correct snippet is embedded by Jekyll at the end of the &lt;code&gt;article&lt;/code&gt; tag (looking much neater than at the end of &lt;code&gt;body&lt;/code&gt;) and the page’s URL and a unique identifier are added to the configuration used to communicate with the Disqus back-end.&lt;/p&gt;

&lt;p&gt;One thing that caught me out for a little while is that if there is no site URL set in the config then the embedded script fails to make the connection with Disqus. It’s a simple fix; just make sure to have &lt;code&gt;url&lt;/code&gt; set in &lt;code&gt;_config.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://youknowfordevs.com"&lt;/span&gt; &lt;span class="c1"&gt;# the base hostname &amp;amp; protocol for your site, e.g. http://example.com&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That’s it. Commit and push your configuration changes to your repo and let Netlify do its magic–a few seconds later you have Disqus comments on your site.&lt;/p&gt;

</description>
      <category>netlify</category>
      <category>disqus</category>
      <category>staticsitegeneration</category>
    </item>
    <item>
      <title>Your Development Workflow Just Got Better, With Docker Compose</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Mon, 02 Jul 2018 13:30:01 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/your-development-workflow-just-got-better-with-docker-compose-4mcl</link>
      <guid>https://dev.to/youknowfordevs/your-development-workflow-just-got-better-with-docker-compose-4mcl</guid>
<description>&lt;p&gt;In a previous post we saw how to &lt;a href="///2018/07/01/dont-install-node-until-youve-read-this.html"&gt;set up our basic Node development environment using Docker&lt;/a&gt;. Our next step is to reduce the size of these unwieldy &lt;code&gt;docker run&lt;/code&gt; commands. This is not just because of their unwieldiness but also because if we just type them at the command line we don’t have an easy way to share what we’re doing–not just with other people but with ourselves, tomorrow, when we’ve inevitably forgotten what we were doing today!&lt;/p&gt;

&lt;p&gt;So before we forget the command we were running in the previous post, let’s lock it down in a file that we can use repeatedly.&lt;/p&gt;

&lt;p&gt;But in what file, you ask?&lt;/p&gt;

&lt;h1&gt;
  
  
  Docker Compose
&lt;/h1&gt;

&lt;p&gt;The tool we’re going to use to capture these kinds of commands is Docker Compose. This app will have been installed for you when you installed Docker (assuming that you took the advice of our previous post to &lt;a href="///2018/07/01/dont-install-node-until-youve-read-this.html#docker-first"&gt;embrace Docker&lt;/a&gt;). Docker Compose is an &lt;em&gt;incredibly&lt;/em&gt; handy utility because it allows us to use a YAML file to create definitions for Docker commands, rather than having to use command-line options. This means we can easily share and version our commands.&lt;/p&gt;

&lt;p&gt;The YAML file can also be used to manage a group of containers that we want to launch at the same time–perhaps our microservice needs a MySQL database or a RabbitMQ queue–and as if that wasn’t enough, the same file format can also be used to describe a Docker swarm stack–a collection of services that will all run together–when it comes time to deploy our application.&lt;/p&gt;

&lt;p&gt;Just as in the previous post we suggested that applications should no longer be installed locally but instead run inside Docker containers, now we want to just as strongly argue that no activity that can be performed in the creation of your application–whether linting, testing, packaging, deploying–should be carried out without it being captured in a Docker Compose file.&lt;/p&gt;

&lt;p&gt;But before we get too excited, let’s go back to the command we were running in the earlier post (which launches a development container in which we run Node) and let’s convert it to use Docker Compose.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Docker Compose Configuration File
&lt;/h2&gt;

&lt;p&gt;Recall that the command we were running was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/usr/src/app &lt;span class="nt"&gt;-p&lt;/span&gt; 127.0.0.1:3000:3000 &lt;span class="se"&gt;\&lt;/span&gt;
  node:10.5.0-alpine /bin/sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To turn this into a Docker Compose file, fire up your favourite editor and create a file called &lt;code&gt;docker-compose.yml&lt;/code&gt; into which you’ve placed the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.2"&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:10.5.0-alpine&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;127.0.0.1:3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can probably figure out which parts of the original command-line map to which entries in this Compose file, so we’ll just flag up a couple of things that might not be immediately obvious.&lt;/p&gt;

&lt;p&gt;First, the entry &lt;code&gt;dev&lt;/code&gt; is just the name of our &lt;em&gt;service&lt;/em&gt;. It can be anything we like, and there can be more than one of these entries in a file. We’ll see in a moment how it’s used to indicate what we want to launch.&lt;/p&gt;

&lt;p&gt;(A service is the term Docker Compose uses to describe running containers. The reason it doesn’t use the term &lt;em&gt;container&lt;/em&gt; in the way that we would if we were using the &lt;code&gt;docker run&lt;/code&gt; command is that a service has extra features such as being able to comprise more than one instance of a container.)&lt;/p&gt;

&lt;p&gt;Next you probably noticed that the port mapping now has quotes around it; on the command line we had &lt;code&gt;-p 127.0.0.1:3000:3000&lt;/code&gt; whilst in the compose file we have &lt;code&gt;"127.0.0.1:3000:3000"&lt;/code&gt;. This is a best practice due to the way YAML is parsed: YAML 1.1 treats an unquoted value made of colon-separated numbers that are each below 60–for example &lt;code&gt;40:40&lt;/code&gt;–as a single base 60 (sexagesimal) number, not as &lt;code&gt;40&lt;/code&gt; followed by &lt;code&gt;40&lt;/code&gt;. You &lt;em&gt;could&lt;/em&gt; just remember that you need quotes when the ports involved are below 60, but most Docker Compose files you’ll see place quotes around &lt;em&gt;every&lt;/em&gt; port mapping, which is a little easier to remember.&lt;/p&gt;
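&lt;p&gt;A quick illustration of the difference:&lt;/p&gt;

```yaml
ports:
- 40:40     # unquoted: a YAML 1.1 parser reads this as the base 60 number 2440
- "40:40"   # quoted: the string "40:40", i.e. host port 40 to container port 40
```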

&lt;p&gt;Finally, you will also have spotted that the &lt;code&gt;${PWD}&lt;/code&gt; part of our &lt;code&gt;docker run&lt;/code&gt; command has now been replaced with &lt;code&gt;.&lt;/code&gt;, i.e., the current directory. Docker Compose doesn’t need the environment variable when mapping volumes, which makes things a bit easier: relative paths are supported, and are resolved relative to the Compose file itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Launching Our Development Container
&lt;/h2&gt;

&lt;p&gt;Now we have our configuration set up, it’s a simple matter of running the Docker Compose command with the name of our service. Run the following command and you should have launched the development environment again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--service-ports&lt;/span&gt; dev 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Ok…so it’s still not the shortest command on the block–we’ll see in a future post how we can get this down further. But it’s a lot easier to remember than the long &lt;code&gt;docker run&lt;/code&gt; command we had before. And what’s more, it will &lt;em&gt;always be the same&lt;/em&gt; no matter what changes you make to the configuration file; any additional options we want to add to our &lt;code&gt;docker run&lt;/code&gt; will go in our Docker Compose file, clearly documented and under source control.&lt;/p&gt;

&lt;p&gt;Just to wrap up this section, we’ll quickly explain the parameters that we need to pass to &lt;code&gt;docker-compose run&lt;/code&gt;. The first is &lt;code&gt;--rm&lt;/code&gt; which is exactly the same as the option we were using with &lt;code&gt;docker run&lt;/code&gt;–when the command has finished running our container will be deleted.&lt;/p&gt;

&lt;p&gt;The second is &lt;code&gt;--service-ports&lt;/code&gt; which instructs Docker Compose to make available any port mappings we define in the Compose file. It’s a little annoying to have to add this parameter, and you’ll find many discussion threads arguing that this behaviour should be the default. But the logic is fair; if we are launching a number of connected services, such as a web server and a MySQL database, we don’t necessarily want every single port to be mapped to our host machine. With a web server and a MySQL server, for example, there is no need to expose MySQL’s port &lt;code&gt;3306&lt;/code&gt; on our laptop, since it is only needed for the web server’s connection to the backend. Docker Compose will create a network that the web server and MySQL can use to communicate with each other.&lt;/p&gt;
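&lt;p&gt;To make that concrete, here’s a sketch of the kind of two-service file we mean (the MySQL image tag is illustrative); only &lt;code&gt;web&lt;/code&gt; publishes a port, while &lt;code&gt;mysql&lt;/code&gt; is reachable from &lt;code&gt;web&lt;/code&gt; over the network Docker Compose creates, using the service name as hostname:&lt;/p&gt;

```yaml
version: "3.2"

services:
  web:
    image: node:10.5.0-alpine
    ports:
    - "3000:3000"       # the only port exposed on our laptop

  mysql:
    image: mysql:5.7    # no ports entry; reachable from web as mysql:3306
```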

&lt;p&gt;So there we have it; run that command, and we will get a shell prompt, and then we can launch our web server in exactly the same way as we did in the previous post, when using &lt;code&gt;docker run&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /usr/src/app
node app.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Working Directory
&lt;/h2&gt;

&lt;p&gt;We said a moment ago that one of the advantages of using Docker Compose is that we can add additional options without changing the way we run the command. An example would be to get Docker to change to the working directory for us, i.e., to remove the need for the &lt;code&gt;cd /usr/src/app&lt;/code&gt; step in our sequence above.&lt;/p&gt;

&lt;p&gt;To do this we only need to add the &lt;code&gt;working_dir&lt;/code&gt; option to the YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.2"&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:10.5.0-alpine&lt;/span&gt;
    &lt;span class="na"&gt;working_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And to stress again, we still launch our development environment in exactly the same way as we did before–the only changes are to the configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--service-ports&lt;/span&gt; dev 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This time our command-line prompt will have us sitting in the correct directory, and we can launch the server directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;node app.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Changing Launch Commands
&lt;/h2&gt;

&lt;p&gt;But we can go a bit further here; we’ll rarely need to be ‘inside’ the container doing stuff, since we’ll be using our favourite editor running on our laptop (remember &lt;a href="///2018/07/01/dont-install-node-until-youve-read-this.html#a-development-environment"&gt;we’ve mapped our project directory into the container so that our laptop and the container both have access to our files&lt;/a&gt;). So we’ll probably find ourselves more often than not invoking our container and then running the server. Given that, we could change the command that is run inside the container from one that launches a shell to one that launches the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.2"&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:10.5.0-alpine&lt;/span&gt;
    &lt;span class="na"&gt;working_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app.js"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Making a Clean Exit
&lt;/h3&gt;

&lt;p&gt;You probably spotted that the command we added was not what we might have expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app.js"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;but:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app.js"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The background as to why is that if we use the first version of the command, which simply runs &lt;code&gt;node&lt;/code&gt; with &lt;code&gt;app.js&lt;/code&gt; as the parameter, then when we try to exit the server with &lt;code&gt;[CTRL]+C&lt;/code&gt; nothing will happen and we’ll have to find some other way to kill the server. This is because Node does not handle signals such as &lt;code&gt;SIGINT&lt;/code&gt; (which is what &lt;code&gt;[CTRL]+C&lt;/code&gt; sends) or &lt;code&gt;SIGTERM&lt;/code&gt; by default when it is running as the primary, top-level application in a container (what you’ll often see described as &lt;em&gt;running as PID 1&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;However the shell (&lt;code&gt;/bin/sh&lt;/code&gt;) &lt;em&gt;does&lt;/em&gt; handle these signals correctly, and will cleanly shut down our server when it receives &lt;code&gt;[CTRL]+C&lt;/code&gt;. So all we need to do is run our server inside a shell.&lt;/p&gt;

&lt;p&gt;If you need (or want) to understand this in more detail then &lt;a href="https://www.google.co.uk/search?q=pid+1+docker+node"&gt;search online for something along the lines of “pid 1 docker node”&lt;/a&gt; and you’ll find a number of articles. If you just want to cut to the chase then read the section &lt;a href="https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#handling-kernel-signals"&gt;Handling Kernel Signals&lt;/a&gt; in the &lt;a href="https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md"&gt;best practices guidance for using Node in Docker&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multiple Services
&lt;/h2&gt;

&lt;p&gt;Of course, if we think we might need &lt;em&gt;both&lt;/em&gt; of these commands–the one to launch a Bash shell inside the container, ready for playing around, and the one to launch the server–then instead of overwriting our first, we can just add a second entry to our Docker Compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.2"&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:10.5.0-alpine&lt;/span&gt;
    &lt;span class="na"&gt;working_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="na"&gt;serve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:10.5.0-alpine&lt;/span&gt;
    &lt;span class="na"&gt;working_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app.js"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We’ve changed the name of the shell version from &lt;code&gt;dev&lt;/code&gt; to &lt;code&gt;shell&lt;/code&gt; to indicate what it’s used for, which means we can now launch the server with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--service-ports&lt;/span&gt; serve
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Don’t Repeat Yourself
&lt;/h3&gt;

&lt;p&gt;One last tip involves a way to reuse the common settings we have in our file. As you can see the only difference between our two services is in the &lt;code&gt;command&lt;/code&gt; value. Ideally we’d like to place all of the other values into some common collection and share them between both services.&lt;/p&gt;

&lt;p&gt;This is possible in &lt;a href="https://docs.docker.com/compose/compose-file/"&gt;version 3.4 onwards of the Docker Compose file format&lt;/a&gt; by using YAML anchors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.4"&lt;/span&gt;
&lt;span class="na"&gt;x-default-service-settings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="nl"&gt;&amp;amp;default-service-settings&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:10.5.0-alpine&lt;/span&gt;
    &lt;span class="na"&gt;working_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/usr/src/app&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*default-service-settings&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="na"&gt;serve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*default-service-settings&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app.js"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So note first that the &lt;code&gt;version&lt;/code&gt; value has been updated at the top of the document. Then, any block that we want to create for sharing goes at the top level with an &lt;code&gt;x-&lt;/code&gt; prefix–that’s how we tell Docker Compose to ignore the block rather than try to interpret it as service configuration.&lt;/p&gt;

&lt;p&gt;Within the custom block we set an anchor (the &lt;code&gt;&amp;amp;default-service-settings&lt;/code&gt; part) and give it any name we want. Then finally we can refer to that block by referencing the anchor with the &lt;code&gt;&amp;lt;&amp;lt;&lt;/code&gt; syntax.&lt;/p&gt;
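&lt;p&gt;One more detail worth knowing: keys written out explicitly in a service take precedence over merged-in values, so a service can inherit the defaults and still override individual settings. For example (the &lt;code&gt;8080&lt;/code&gt; mapping here is purely illustrative):&lt;/p&gt;

```yaml
  serve:
    &lt;&lt;: *default-service-settings
    ports:              # overrides the merged "3000:3000" mapping
    - "8080:3000"
    command: ["/bin/sh", "-c", "node app.js"]
```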

&lt;h1&gt;
  
  
  Next Steps
&lt;/h1&gt;

&lt;p&gt;We’ve taken our &lt;a href="///2018/07/01/dont-install-node-until-youve-read-this.html"&gt;original &lt;code&gt;docker run&lt;/code&gt; command&lt;/a&gt; and converted it to use Docker Compose, making complex configurations much easier to manage. We’ve also added some additional commands to help with our development process. And we also now have a way to keep a collection of commands under source control. We can now build on this approach to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add more directory mappings so that modules installed with &lt;code&gt;npm install&lt;/code&gt; stay &lt;em&gt;inside&lt;/em&gt; our container;&lt;/li&gt;
&lt;li&gt;add entries for test containers that include runners like Mocha or TAP;&lt;/li&gt;
&lt;li&gt;add entries for commands that help the build process, for example using Webpack or Parcel;&lt;/li&gt;
&lt;li&gt;launch local Nginx servers that will mirror our live deployments.&lt;/li&gt;
&lt;/ul&gt;
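&lt;p&gt;As a small taste of the first of those, the usual trick–sketched here, ahead of a proper treatment–is to add an extra, anonymous volume for &lt;code&gt;node_modules&lt;/code&gt; so that the mapping of our project directory doesn’t hide the modules installed inside the container:&lt;/p&gt;

```yaml
    volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules   # anonymous volume: installed modules stay inside the container
```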

&lt;p&gt;We’ll drill into these techniques and more in future posts.&lt;/p&gt;

&lt;p&gt;Good luck with your projects!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockercompose</category>
      <category>node</category>
    </item>
    <item>
      <title>Don’t Install Node Until You’ve Read This (Or, How to Run Node the Docker Way)</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Sun, 01 Jul 2018 13:30:01 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/don-t-install-node-until-you-ve-read-this-or-how-to-run-node-the-docker-way-38im</link>
      <guid>https://dev.to/youknowfordevs/don-t-install-node-until-you-ve-read-this-or-how-to-run-node-the-docker-way-38im</guid>
      <description>&lt;p&gt;We need Node for some application or other–perhaps we’re creating a microservice or just want to follow along with a tutorial.&lt;/p&gt;

&lt;p&gt;But most places you start with suggest that the first step is to install Node for your operating system. Perhaps you’re on a Mac so now you have to start thinking about whether you should also install Homebrew or MacPorts.&lt;/p&gt;

&lt;p&gt;Or you’re on Ubuntu so you head in the &lt;code&gt;apt-get&lt;/code&gt; direction…except before you know it, to get the latest version you find yourself using &lt;code&gt;curl&lt;/code&gt; to pipe some script to your shell.&lt;/p&gt;

&lt;p&gt;Windows? You could just use the Windows installer, but as with macOS you ponder whether it’s time to embrace the Chocolatey or Scoop package managers.&lt;/p&gt;

&lt;p&gt;In this blog post we’ll look at how skipping over all of this and heading straight to a Docker environment makes it much easier to manage your Node applications and development workflow, and, what’s more, gets you going with best practices right from the start.&lt;/p&gt;

&lt;h1&gt;
  
  
  Docker First
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---Kb3PD8v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/docker.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---Kb3PD8v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/docker.png" alt="Docker Logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whichever route we go with installing Node the OS-specific way, we now have two problems: the first is that the way we install Node is different on each platform, and damn, that’s annoying. And number two, we now have Node installed &lt;em&gt;globally&lt;/em&gt; on our laptop. Why so sad? Well, now if we want to use different versions of Node for different projects we have to faff about with something like &lt;code&gt;nvm&lt;/code&gt;. (And if you were planning on running a Python project it’s the same story, with &lt;code&gt;virtualenv&lt;/code&gt;.)&lt;/p&gt;

&lt;p&gt;So do yourself a favour and &lt;a href="https://docs.docker.com/install/"&gt;get Docker installed&lt;/a&gt;. True, how you install Docker will also be different for different platforms–Ubuntu is slightly different to Mac and Windows. But this initial effort will repay you later, because now you’ll have a &lt;em&gt;standard&lt;/em&gt; way to install Node, Ruby, Python, TensorFlow, R…whatever language you’re using for your projects–or, perhaps more likely nowadays, &lt;em&gt;languages&lt;/em&gt;. They all just got way easier to manage.&lt;/p&gt;

&lt;p&gt;So assuming that you now have Docker, let’s get a development environment set up so that you can get back to that tutorial or project.&lt;/p&gt;

&lt;h1&gt;
  
  
  Running Node
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_XFLMPyp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/node.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_XFLMPyp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/node.png" alt="Node Logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, create a new directory for your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;new-project &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;new-project
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and then launch the latest version of Node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; node:10.5.0-alpine
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you haven’t run this version of Node before, Docker will download it for you. After a bit of toing and froing you’ll be left with the usual Node command prompt. Type something like &lt;code&gt;5+6&lt;/code&gt; and press return to check all is well, and then press &lt;code&gt;[CTRL]+D&lt;/code&gt; to exit.&lt;/p&gt;

&lt;p&gt;If you are reading this in the future you might want to find out what the most recent version number is; just head to the &lt;a href="https://hub.docker.com/_/node/"&gt;Docker Hub page for the official Node Docker image&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interactive Containers
&lt;/h2&gt;

&lt;p&gt;We executed the &lt;code&gt;docker run&lt;/code&gt; command with a couple of options. The first–the &lt;code&gt;-it&lt;/code&gt; part–is a combination of the two options, &lt;code&gt;-i&lt;/code&gt; and &lt;code&gt;-t&lt;/code&gt;. It’s these options together that mean we can interact with the running container as if it was our normal shell, accepting input from our keyboard and sending output to our display.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disposable Containers
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;--rm&lt;/code&gt; option causes the container to be deleted when we exit. Deleting containers as we go along is a good habit to get into, because it puts us in the mindset that our containers are &lt;em&gt;disposable&lt;/em&gt;. This is particularly important when it comes to deployment, because we don’t want our container to hold any state internally–any updates or processing should result in writes to external services such as a connected file system, cloud storage, queues, and so on. Taking this approach makes it really easy to upgrade our images to newer versions when necessary–we just throw away the old ones and launch completely new ones.&lt;/p&gt;

&lt;p&gt;(It will also make it easier to scale, since we can just launch a bunch more containers when we need to do more work, and provided that all state is maintained &lt;em&gt;outside&lt;/em&gt; of the containers this becomes straightforward.)&lt;/p&gt;
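&lt;p&gt;You can see the clean-up in action: run a one-off command with &lt;code&gt;--rm&lt;/code&gt; and then list the containers for that image–there should be nothing left behind.&lt;/p&gt;

```shell
# Run a throwaway command; --rm deletes the container as soon as it exits
docker run --rm node:10.5.0-alpine node -e "console.log('done')"

# Listing all containers created from this image should print nothing
docker ps -aq --filter ancestor=node:10.5.0-alpine
```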

&lt;h3&gt;
  
  
  Bonus Points: No SSH
&lt;/h3&gt;

&lt;p&gt;If you really want to get into good habits with your Docker containers then also avoid the temptation to SSH into a running container to see what’s going on. There’s nothing worse than making a tweak to fix something, logging out, and then forgetting what was changed. The service may now be running again and your boss thinks you are flavour of the month, but it’s fragile. Deploy again and you overwrite those changes. Far better to fix the problem in your deployment scripts, then simply tear down the faulty service and launch another. The changes are now clear to see in source control and reproducible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Versions
&lt;/h2&gt;

&lt;p&gt;Beyond the command-line options to &lt;code&gt;docker run&lt;/code&gt;, there are also a few things to note about the Node Docker image that we’ve used (the &lt;code&gt;node:10.5.0-alpine&lt;/code&gt; part).&lt;/p&gt;

&lt;p&gt;First, it’s worth being specific about the version number of Node that you are using, since it makes it easier to force updates and to know what is being deployed. If we were to only specify ‘version 10’:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; node:10-alpine
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;or even ‘the latest version of node’:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; node:alpine
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;then although we’ll get &lt;code&gt;10.5.0&lt;/code&gt; the first time through, once the images are updated at some later point we won’t pick up the same version on subsequent runs. At some point using &lt;code&gt;node:10-alpine&lt;/code&gt; in the command will cause us to pick up version &lt;code&gt;10.6.0&lt;/code&gt; or &lt;code&gt;10.7.0&lt;/code&gt; of Node. And using &lt;code&gt;node:alpine&lt;/code&gt; will at some point cause us to get version &lt;code&gt;11&lt;/code&gt; and onwards.&lt;/p&gt;

&lt;p&gt;However, if we choose a specific version like &lt;code&gt;10.5.0&lt;/code&gt; then although we also won’t get updates automatically, it will be a simple case of updating to &lt;code&gt;10.5.1&lt;/code&gt; in our build files, when we are ready to force a download of the latest changes.&lt;/p&gt;

&lt;p&gt;This is particularly important when it comes to deploying applications later on (or sharing your code with other people), since you want to be able to control what version appears where. And perhaps more to the point, when you are troubleshooting you want to know for sure what version was used.&lt;/p&gt;

&lt;h3&gt;
  
  
  Controlled Updates
&lt;/h3&gt;

&lt;p&gt;It’s tempting of course to want to ‘always use the latest’; after all, the latest will be faster, won’t it? And won’t it have the latest security patches? This is true of course, but in the quest for building a reliable infrastructure you should aim to &lt;em&gt;control&lt;/em&gt; updates to the foundations. This means that if you have a bunch of code that is working fine on version &lt;code&gt;10.5.0&lt;/code&gt;, nicely passing all of its tests and performing well, then a move to another version of Node should be something that is planned and tested. The only &lt;em&gt;real&lt;/em&gt; pressure to move versions comes with the point releases such as &lt;code&gt;10.5.1&lt;/code&gt; or &lt;code&gt;10.5.2&lt;/code&gt;, since they will contain security patches and bug fixes; a move to &lt;code&gt;10.6&lt;/code&gt; or higher is certainly a ‘nice to have’, but if your code is working and your service is running, then you will definitely want to consider whether your time is better spent elsewhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Base OS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lE-U5yOz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/alpine-linux.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lE-U5yOz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://youknowfordevs.com/images/uploads/alpine-linux.svg" alt="Alpine Logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second thing to note about the Node Docker image selection is that we’ve used the &lt;code&gt;alpine&lt;/code&gt; version of the image, which uses &lt;a href="https://alpinelinux.org/about/"&gt;Alpine Linux&lt;/a&gt; as the base operating system. This is the lightest of the Node images, providing only the bare minimum of an operating system needed to get Node running–we’re most likely creating microservices, after all.&lt;/p&gt;

&lt;p&gt;You’ve probably come across the &lt;code&gt;alpine&lt;/code&gt; project but if you haven’t, take a look; it’s being used right across the Docker ecosystem to keep Docker images light.&lt;/p&gt;

&lt;p&gt;It should be said as well that ‘light’ doesn’t just mean small for the sake of size–that’s all good of course, since it reduces the amount of data flying around your network. But in the case of a deployed service ‘light’ also means reducing the number of moving parts that can go wrong. If you start with something big like an Ubuntu base image then you’re bringing in a bunch of unnecessary code, and so increasing the possibility of something going wrong that wasn’t important in the first place. Imagine some nefarious outsider taking advantage of a security hole in Ubuntu, in a service that you didn’t even need!&lt;/p&gt;

&lt;p&gt;(You may have come across the expression ‘reducing attack surface’; this is &lt;em&gt;exactly&lt;/em&gt; what is being referred to.)&lt;/p&gt;

&lt;p&gt;So keep it small, tight, and controlled…and most of all, &lt;em&gt;secure&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building Your Own Base Images - Don’t!
&lt;/h3&gt;

&lt;p&gt;And it should probably go without saying that you don’t want to be building your own base images. The various Docker Node images, for example, are maintained by the Node project itself, so if anyone is going to know how to build a secure, fast and reliable image it’s them. What’s more, if anything does go wrong there is a whole community of people using the image and reporting issues; you’ll invariably find a solution very quickly.&lt;/p&gt;

&lt;h1&gt;
  
  
  A Development Environment
&lt;/h1&gt;

&lt;p&gt;So we have chosen a Node image, and we have it running from the command line. Let’s press on with our development environment.&lt;/p&gt;

&lt;p&gt;In order to be able to update files in our project directory we need to give our Node application ‘access’ to that directory. This is achieved with the ‘volume’ option on the Docker command. Try this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/usr/src/app node:10.5.0-alpine &lt;span class="se"&gt;\&lt;/span&gt;
  /bin/sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"touch /usr/src/app/README.md"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create a directory &lt;em&gt;inside&lt;/em&gt; your Docker container (at &lt;code&gt;/usr/src/app&lt;/code&gt;), and make it refer to your current working directory &lt;em&gt;outside&lt;/em&gt; your container (the &lt;code&gt;${PWD}&lt;/code&gt; part);&lt;/li&gt;
&lt;li&gt;launch a shell (&lt;code&gt;/bin/sh&lt;/code&gt;–the Alpine image ships BusyBox’s shell rather than Bash) instead of Node, to run the &lt;code&gt;touch&lt;/code&gt; command, which will create a &lt;code&gt;README.md&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The command should exit cleanly. Check your current directory to ensure that the file has been created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-al&lt;/span&gt;
total 0
drwxr-xr-x 4 markbirbeck staff 136 1 Jul 13:26 &lt;span class="nb"&gt;.&lt;/span&gt;
drwxr-xr-x 10 markbirbeck staff 340 1 Jul 11:47 ..
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 markbirbeck staff 0 1 Jul 12:58 README.md
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is a laborious way to create a file, but we just wanted to check that our Docker container was able to ‘see’ our laptop project directory and that it could update files within it.&lt;/p&gt;

&lt;p&gt;We now have &lt;em&gt;two&lt;/em&gt; ways that we can work on our project: we can either fire up &lt;code&gt;vi&lt;/code&gt; from &lt;em&gt;inside&lt;/em&gt; the container and make edits which will immediately be mirrored to our working directory on our laptop; or we can use our familiar laptop tools–like Visual Studio Code, Sublime Text, and so on–to create and edit files &lt;em&gt;outside&lt;/em&gt; the container, knowing that changes will be immediately mirrored to the &lt;code&gt;/usr/src/app&lt;/code&gt; directory within the container.&lt;/p&gt;

&lt;p&gt;Either way, we can now develop in pretty much the same way as we normally would on our laptop, but with an easy-to-manage Node environment, courtesy of Docker.&lt;/p&gt;

&lt;h1&gt;
  
  
  Opening Ports
&lt;/h1&gt;

&lt;p&gt;One last thing. Let’s say we got started with Node by following &lt;a href="https://nodejs.org/en/docs/guides/getting-started-guide/"&gt;the little intro on the Node site&lt;/a&gt;. You’ll see that it sets up a ‘hello world’ web server and suggests that the page can be viewed at &lt;code&gt;http://localhost:3000&lt;/code&gt;. Go ahead and create that &lt;code&gt;app.js&lt;/code&gt; file in your current directory…but there’s no point in running it, since as things stand with our &lt;em&gt;Docker&lt;/em&gt; development environment approach, this server won’t work.&lt;/p&gt;

&lt;p&gt;However, just as we saw earlier that we can map directories between the host and the container we can also map ports. The first step is to add the &lt;code&gt;-p&lt;/code&gt; option to our command like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/usr/src/app &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 node:10.5.0-alpine &lt;span class="se"&gt;\&lt;/span&gt;
  /bin/sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can now access port 3000 &lt;em&gt;inside&lt;/em&gt; the container by making requests to port 3000 on our host machine, which satisfies the &lt;code&gt;http://localhost:3000&lt;/code&gt; part of the Node tutorial.&lt;/p&gt;

&lt;p&gt;But there is one last minor tweak we’ll need to make; when the server launches it will listen on the IP address &lt;code&gt;127.0.0.1&lt;/code&gt; which would be fine on our laptop, but is no good inside a container. We might use this address to prevent our server being reached from outside of our laptop, but in the case of a Docker container there is a network connection from our laptop to the container (think of them as separate machines), so keeping things ‘private’ &lt;em&gt;inside&lt;/em&gt; the container will just mean that nothing is reachable.&lt;/p&gt;

&lt;p&gt;All we need to do is change the file that was provided on the Node site, and modify the &lt;code&gt;hostname&lt;/code&gt; variable from &lt;code&gt;127.0.0.1&lt;/code&gt; to &lt;code&gt;0.0.0.0&lt;/code&gt;. This will tell the server to listen to &lt;em&gt;all&lt;/em&gt; IP addresses within the container, not just &lt;code&gt;localhost&lt;/code&gt;. We can still ensure that our server is not reachable from outside of our laptop if we want, by modifying the Docker command to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/usr/src/app &lt;span class="nt"&gt;-p&lt;/span&gt; 127.0.0.1:3000:3000 &lt;span class="se"&gt;\&lt;/span&gt;
  node:10.5.0-alpine /bin/sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I.e., the mapping from host port to container port should only take place on &lt;code&gt;127.0.0.1&lt;/code&gt; rather than on &lt;code&gt;0.0.0.0&lt;/code&gt; (which is the default for a port mapping).&lt;/p&gt;
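&lt;p&gt;For reference, a minimal &lt;code&gt;app.js&lt;/code&gt; along the lines of the Node ‘getting started’ example, with the &lt;code&gt;hostname&lt;/code&gt; change already made, looks something like this (written as a shell heredoc so it can be pasted straight into a terminal):&lt;/p&gt;

```shell
# Write a minimal 'hello world' server; note that hostname is 0.0.0.0,
# not 127.0.0.1, so it is reachable through the container's port mapping
cat > app.js <<'EOF'
const http = require('http')

const hostname = '0.0.0.0'
const port = 3000

const server = http.createServer((req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain')
  res.end('Hello World\n')
})

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`)
})
EOF
```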

&lt;p&gt;Whether you modify the port setting when you run the command or not, once the &lt;code&gt;app.js&lt;/code&gt; file has this minor change then the server can be launched from inside the container. Change directory to where the &lt;code&gt;app.js&lt;/code&gt; file is, and then launch it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /usr/src/app
node app.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you should be able to reach the ‘hello world’ page from the host machine by visiting &lt;code&gt;http://localhost:3000&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Next Steps
&lt;/h1&gt;

&lt;p&gt;Assuming all is well, we can now carry on with whatever project or tutorial we were following. Anywhere that the tutorial tells us to run something from the command-line we make sure to do it from &lt;em&gt;inside&lt;/em&gt; the container by firing up the Bash shell. If the project requires that we expose a different port then just change the &lt;code&gt;-p&lt;/code&gt; option (or add more mappings if necessary).&lt;/p&gt;

&lt;p&gt;There are a lot more ways that we can improve our development environment; we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="///2018/07/02/your-development-workflow-just-got-better-with-docker-compose.html"&gt;bring in Docker Compose to shorten our command lines&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;add more directory mappings so that modules installed with &lt;code&gt;npm install&lt;/code&gt; stay &lt;em&gt;inside&lt;/em&gt; our container;&lt;/li&gt;
&lt;li&gt;create test containers that include runners like Mocha or TAP;&lt;/li&gt;
&lt;li&gt;launch local Nginx servers that will mirror our live deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But all of these will build on the basic setup we have here. We’ll drill into these techniques in future posts.&lt;/p&gt;

&lt;p&gt;Good luck with your projects!&lt;/p&gt;

</description>
      <category>node</category>
      <category>docker</category>
    </item>
    <item>
      <title>Using Docker and Repo Linter to Capture Project Structure Rules</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Mon, 21 May 2018 16:44:01 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/using-docker-and-repo-linter-to-capture-project-structure-rules-3bc3</link>
      <guid>https://dev.to/youknowfordevs/using-docker-and-repo-linter-to-capture-project-structure-rules-3bc3</guid>
      <description>&lt;p&gt;One of the things that’s frustrating about having lots of open source projects is, well…having lots of open source projects. Some issues are obvious, such as how to keep projects moving along, responding to pull requests, fixing bugs and adding features. But there are plenty of less obvious things to deal with, such as how to keep a whole bunch of projects aligned once you’ve decided on some best practices.&lt;/p&gt;

&lt;h1&gt;
  
  
  StandardJS
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://standardjs.com"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5B0BitzJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.rawgit.com/feross/standard/master/sticker.svg" alt="Standard JavaScript"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s say we’ve come across &lt;a href="https://standardjs.com/"&gt;StandardJS&lt;/a&gt; (which we have) and we want to apply it to all of our projects (which we do); how do we go about this? Obviously we have to locally clone every project, run StandardJS to reformat the files, run tests on the results, commit the changes, and submit a pull request. And then when we realise that we forgot to add the StandardJS badge to each and every &lt;code&gt;README&lt;/code&gt; file, we do the whole dance again.&lt;/p&gt;

&lt;p&gt;Even if we put to one side (at least for now) the difficulty of making lots of changes to lots of projects, how do we ensure that any new project we create follows whatever new conventions we’ve adopted?&lt;/p&gt;

&lt;h1&gt;
  
  
  Occam’s Razor
&lt;/h1&gt;

&lt;p&gt;Take something really simple like an &lt;code&gt;AUTHORS&lt;/code&gt; file. In a Node project we can either put the &lt;a href="https://docs.npmjs.com/files/package.json#people-fields-author-contributors"&gt;names of contributors into the &lt;code&gt;package.json&lt;/code&gt; file&lt;/a&gt;, or NPM will pick up the names of contributors from an &lt;code&gt;AUTHORS&lt;/code&gt; file if one is present. Which approach is better? Well in my opinion the latter is better because the &lt;code&gt;AUTHORS&lt;/code&gt; file is a convention that you’ll find in many different types of open source projects, not just Node. My personal rule is that if there is a language-&lt;em&gt;independent&lt;/em&gt; convention that can be followed at no cost then I will always follow it, rather than using a language-specific equivalent. (Think &lt;a href="https://simple.wikipedia.org/wiki/Occam%27s_razor"&gt;Occam’s Razor&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://commons.wikimedia.org/wiki/File:William_of_Ockham.png#/media/File:William_of_Ockham.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Kruukern--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://upload.wikimedia.org/wikipedia/commons/7/70/William_of_Ockham.png" alt="Stained-glass window showing William of Ockham"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
By self-created (Moscarlop) - Own work, &lt;a href="https://creativecommons.org/licenses/by-sa/3.0"&gt;CC BY-SA 3.0&lt;/a&gt;, &lt;a href="https://commons.wikimedia.org/w/index.php?curid=5523066"&gt;Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But regardless of which way we come down on tracking contributors, the point is that we’ve thought about a problem, we’ve made a decision, and the result is a simple rule that we can easily follow in all of our projects (regardless of language)–in my case it’s &lt;em&gt;always have an &lt;code&gt;AUTHORS&lt;/code&gt; file&lt;/em&gt;. If we can just remember this rule then we can avoid having to think about it each time we start a new project; after all, there’s nothing worse than the Groundhog Day moment(s) of going through exactly the same decision-making process all over again, just to arrive at the same conclusion.&lt;/p&gt;

&lt;p&gt;All of which means we’ve moved the problem along a little; whereas before we were having to decide every time we started a new Node project whether to put contributor names in the &lt;code&gt;package.json&lt;/code&gt; file or an &lt;code&gt;AUTHORS&lt;/code&gt; file, now the problem is &lt;em&gt;how do we remember what we decided last time?&lt;/em&gt; At least this problem has the advantage of being the same no matter what issue we are trying to solve…Which license should I use? Which version of Node should I put in my &lt;code&gt;Dockerfile&lt;/code&gt;? And so on.&lt;/p&gt;

&lt;p&gt;I’ve considered various ways to capture these kinds of rules, once they’ve been decided on. They could be written in a notebook and kept in the top drawer of your desk. But one criterion I’d like to meet is that the rules be shareable and discussable. This means that even if the only person discussing the rules was myself &lt;em&gt;with&lt;/em&gt; myself, it would be handy to have a space where the &lt;a href="https://standardjs.com/#i-disagree-with-rule-x-can-you-change-it"&gt;advantages and disadvantages of using, say, StandardJS instead of ESLint directly&lt;/a&gt;, could be weighed up. (Not to mention whether or not to use semicolons.) So whilst some kind of shared document would be great, a Git repo would be even better.&lt;/p&gt;
&lt;h1&gt;
  
  
  Specifying Rules
&lt;/h1&gt;

&lt;p&gt;Assuming this knowledge is captured in a repository with all the benefits of issue-tracking and comments, what form should the rules actually take?&lt;/p&gt;

&lt;p&gt;They could just be written in prose, such as &lt;em&gt;always use an &lt;code&gt;AUTHORS&lt;/code&gt; file&lt;/em&gt;, and that would certainly be a great start. But even better would be to find a way to express the rules in such a way that they could be enforced by software. And it’s at this point in my research and Googling that I came across &lt;a href="https://github.com/todogroup/repolinter"&gt;Repo Linter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y1Vl13n8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/todogroup/repolinter/raw/master/docs/images/P_RepoLinter01_logo_only.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y1Vl13n8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/todogroup/repolinter/raw/master/docs/images/P_RepoLinter01_logo_only.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Repo Linter
&lt;/h2&gt;

&lt;p&gt;Repo Linter contains a set of generic tests, such as &lt;em&gt;check that a certain file exists&lt;/em&gt; or &lt;em&gt;check that a set of files contain the specified text&lt;/em&gt;. These tests can then be used to create more specific &lt;em&gt;rules&lt;/em&gt; such as &lt;em&gt;does a README.md file exist?&lt;/em&gt; or &lt;em&gt;do all of the source files have a copyright message at the top?&lt;/em&gt; Rules are then further combined into &lt;em&gt;rulesets&lt;/em&gt; to create the exact collection of policies that you want to enforce for your projects.&lt;/p&gt;

&lt;p&gt;To illustrate, a ruleset that contains two rules–one to check that there is a license and another to check that there is a &lt;code&gt;README&lt;/code&gt;–might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"rules"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"all"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"license-file-exists:file-existence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"LICENSE*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"COPYING*"&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"readme-file-exists:file-existence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"README*"&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is pretty powerful, but Repo Linter goes further and allows us to specify rules that are specific to certain languages (the rules above will apply to all languages). For example, to ensure that all of our Node projects include a &lt;code&gt;package.json&lt;/code&gt; file we’d add this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"rules"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"all"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"language=javascript"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"package-metadata-exists:file-existence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"package.json"&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So now we have a way to express our rules in a simple JSON file. All that’s left is to apply them against a repo.&lt;/p&gt;
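&lt;p&gt;To try this out, save the combined ruleset to a file and check that it parses cleanly before pointing Repo Linter at it. (The filename &lt;code&gt;repolint.json&lt;/code&gt; here is just a choice for illustration; check the Repo Linter documentation for the names it picks up by default.)&lt;/p&gt;

```shell
# Save the rules from above into a single ruleset file
cat > repolint.json <<'EOF'
{
  "rules": {
    "all": {
      "license-file-exists:file-existence": [
        "error", {"files": ["LICENSE*", "COPYING*"]}
      ],
      "readme-file-exists:file-existence": [
        "error", {"files": ["README*"]}
      ]
    },
    "language=javascript": {
      "package-metadata-exists:file-existence": [
        "error", {"files": ["package.json"]}
      ]
    }
  }
}
EOF

# Sanity-check that the ruleset is valid JSON
python3 -m json.tool repolint.json > /dev/null && echo 'ruleset OK'
```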

&lt;h1&gt;
  
  
  The Answer is Docker (Now…What was the Question?)
&lt;/h1&gt;

&lt;p&gt;The easiest way to get these rules into a reusable form is to create a Docker image that bundles both Repo Linter and a configuration file for some set of specific rules. I’ve created a basic Docker image that installs Repo Linter and its dependencies, and adds some simple rules to check for a license, &lt;code&gt;AUTHORS&lt;/code&gt; and &lt;code&gt;README&lt;/code&gt; files. There are also some instructions to help create new images based on this base, so that more specific rules can be added. The repo is &lt;a href="https://github.com/markbirbeck/docker-repolinter"&gt;markbirbeck/docker-repolinter&lt;/a&gt; and the Docker image it generates is in Docker Hub at &lt;a href="https://hub.docker.com/r/markbirbeck/docker-repolinter/"&gt;markbirbeck/docker-repolinter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can also skip this and go straight to the repo I’ve set up to do exactly what I’ve described in this post–capture rules and have a place where they can be discussed. That repo is at &lt;a href="https://github.com/markbirbeck/my-repolinter"&gt;markbirbeck/my-repolinter&lt;/a&gt; and it should be very straightforward to copy it and create a Docker image that captures your own policies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Where Next?
&lt;/h1&gt;

&lt;p&gt;In a future post I’ll look at how we might go about &lt;em&gt;generating&lt;/em&gt; any files that we detect are missing.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>repolinter</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Disposable Laptops With Docker Compose And NPM</title>
      <dc:creator>You Know, For Devs</dc:creator>
      <pubDate>Sun, 23 Jul 2017 15:11:01 +0000</pubDate>
      <link>https://dev.to/youknowfordevs/disposable-laptops-with-docker-compose-and-npm-4j97</link>
      <guid>https://dev.to/youknowfordevs/disposable-laptops-with-docker-compose-and-npm-4j97</guid>
      <description>&lt;p&gt;Switching laptops a lot more frequently than I would want has led over the years to my trying a variety of ways to keep track of the applications that I need to install to get up and running quickly. In this post I’ll look at where I’ve got to with this.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;[If you want to look at the results of this exploration before you look at the ‘why’ then head over to &lt;a href="https://github.com/markbirbeck/docker-compose-run"&gt;docker-compose-run&lt;/a&gt; to see how to run cross-platform apps. Also, look at &lt;a href="https://github.com/markbirbeck/mydesktop"&gt;https://github.com/markbirbeck/mydesktop&lt;/a&gt; for an example of how to quickly install a collection of these cross-platform apps on any device.]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Just when you think you’ve got your laptop set up just how you like it, something happens that means you have to start again. Perhaps you upgraded your hard-drive to an SSD and had to reinstall everything. Or maybe your fan broke and you had to switch to another laptop whilst it was repaired. (Only to have the wireless network card break on the stand-in, requiring &lt;em&gt;another&lt;/em&gt; switch.)&lt;/p&gt;

&lt;p&gt;Or maybe the laptop didn’t break at all, and you left it on an underground train after a cookery class–no doubt distracted by having too many bags of food to carry. The upside of this particular event is that losing an entire computer means you finally have a perfect excuse to buy a Linux laptop rather than another Mac, as you’ve long been telling yourself you would.&lt;/p&gt;

&lt;p&gt;All of these changes have led over the years to a variety of ways to document and track how a machine should be set up. My current interest is laptops, but in the past I needed to address the same challenge with servers. From &lt;a href="http://markbirbeck.com/2008/02/27/sugarcrm-rightscale-and-ec2/"&gt;simple shell scripts in RightScale&lt;/a&gt; back in 2008, through &lt;a href="http://markbirbeck.com/2012/03/16/using-knife-to-launch-ec2-instances-without-a-chef-server/"&gt;Chef (and VirtualBox)&lt;/a&gt; in 2012, and finally to Docker today, I’ve used a myriad of tools to get servers deployed and running as quickly as possible (and to record exactly how I did it).&lt;/p&gt;

&lt;p&gt;The general logic with servers has been to start with a clean slate, run some scripts to configure things how you’d like, and then let it go. Crucial to the approach is to not assume that the server will last forever, which means nothing gets placed on the machine that can’t be ‘recreated’ through the install scripts.&lt;/p&gt;

&lt;p&gt;But for laptops it hasn’t been so straightforward. For a start, there isn’t an easy concept of ‘run these scripts to get a standard laptop’ because laptops tend to change a lot during normal use. What I’ve usually done–and I’ve seen many others do the same–is create a script that installs all the software I like &lt;em&gt;on a new machine&lt;/em&gt; and then try to keep that script up-to-date ready for the fateful day when it will be used again.&lt;/p&gt;

&lt;p&gt;One obvious problem with this is that you have to remember to update the script when you install new software…and then update it again when you remove software. Another issue is that the only time you ever run this script is when you have a brand new laptop, so you can’t be sure that it will always work (or conversely, the only time it will fail is when it’s crucial that it doesn’t). And yet another problem is that many applications don’t have easy installations, so your scripts end up not being a collection of actions but a bunch of comments that remind you of where to download a zip file from, or which special incantation needs to be run to apply a patch to a file to fix a buggy release.&lt;/p&gt;

&lt;p&gt;But perhaps the most annoying issue is that my &lt;a href="https://gist.github.com/markbirbeck/2fb12460eb69484a87b3"&gt;‘Mac From Scratch’ script&lt;/a&gt; ends up being very different to my &lt;a href="https://gist.github.com/markbirbeck/fef06d10cc90e7000fad39beb7216701"&gt;‘Ubuntu From Scratch’ script&lt;/a&gt;–partly because there is different software on the different platforms, but also because, even when both platforms use the same software they each have different package managers.&lt;/p&gt;

&lt;p&gt;Why would I need two sets of scripts? Well, at the moment I have three old Macs lying around plus the Ubuntu laptop I’m writing this post on, and since I can’t afford downtime, in an emergency I’d like to be able to press any of those machines into service as quickly and easily as possible.&lt;/p&gt;

&lt;p&gt;So lately–and with this goal in mind–I’ve been taking a different approach, made possible by first Docker, and then Docker Compose…all bound together with NPM.&lt;/p&gt;

&lt;h1&gt;
  
  
  Dockerising The Desktop
&lt;/h1&gt;

&lt;p&gt;The Docker integration on the Mac (and Windows, for that matter) has come on in leaps and bounds in the last year or so, so an obvious question now is why not run our desktop apps through Docker? &lt;sup id="fnref:2"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This has a number of advantages. The most obvious is that suddenly you’re running exactly the same apps on &lt;em&gt;any&lt;/em&gt; of your desktops. For example, you might use &lt;a href="https://www.bitlbee.org/main.php/news.r.html"&gt;bitlbee&lt;/a&gt; to interact with all of your online chat services. Get this set up in Docker and you can run the exact same configuration on any machine you own, regardless of whether it’s Linux, OS X or Windows. More than that, as you tweak and poke and prod your configuration you can keep track of the changes in a Dockerfile in GitHub–or whatever version control system you prefer–rather than adding and removing comments in a Gist.&lt;/p&gt;

&lt;p&gt;If you need more advantages, perhaps my favourite is that &lt;em&gt;containerising your desktop&lt;/em&gt; keeps down the clutter that can happen when installing lots of dependencies. For example: I’ve never really liked Ruby, I’m afraid &lt;sup id="fnref:1"&gt;2&lt;/sup&gt;, so I’d really rather not have loads of gems cluttering up my environment. Of course, if I’m running an application that is written in Ruby then I’ll need those gems, but if I move away from the app to use another I don’t want the app (and Ruby) to put up a fight when I come to uninstall it. By keeping all of the gems inside a container they’re all gone in one fell swoop.&lt;/p&gt;

&lt;p&gt;And the icing on this already delicious cake is that if you decide to move an app to the cloud–and something like &lt;em&gt;bitlbee&lt;/em&gt; is a prime candidate for that–then now that you have a &lt;code&gt;Dockerfile&lt;/code&gt; it’s just a question of deployment.&lt;/p&gt;

&lt;h1&gt;
  
  
  Let’s Jekyll
&lt;/h1&gt;

&lt;p&gt;To illustrate all of these ideas, let’s get a blog set up using Jekyll.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker
&lt;/h2&gt;

&lt;p&gt;It so happens that the Jekyll development team have created a number of Docker images that contain everything we need to get going. The images are hosted on Dockerhub at &lt;a href="https://hub.docker.com/r/jekyll/jekyll/"&gt;jekyll/jekyll&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The general pattern with apps that are running in a Docker container is to provide a load of preamble and then the specific command that we want to run. The ‘specific command’ part that gets Jekyll to take the files in the current directory and build our blog looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;jekyll build
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With Docker we add the preamble at the front, so it will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;preamble] jekyll build
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The preamble will get Docker to launch the correct container before running the command that we add to the end–i.e., the &lt;code&gt;jekyll build&lt;/code&gt; part– &lt;em&gt;inside&lt;/em&gt; the container. We’ll look later at how we can refine this preamble, but for now let’s spell it out.&lt;/p&gt;
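
&lt;p&gt;To make the ‘preamble + command’ split concrete, here’s a dry run that stubs out &lt;code&gt;docker&lt;/code&gt; with a shell function, so we can see the full command line that would be executed without needing Docker installed. (The &lt;code&gt;/srv/blog&lt;/code&gt; host path is just a placeholder for the blog’s working directory.)&lt;/p&gt;

```shell
# Stub `docker` so this dry run prints the command instead of executing it.
docker() { echo "docker $*"; }

# The preamble stays fixed; only the trailing Jekyll command changes.
preamble='run --rm --label=jekyll --volume=/srv/blog:/srv/jekyll -it -p 127.0.0.1:4000:4000 jekyll/jekyll'

docker ${preamble} jekyll build
docker ${preamble} jekyll serve
```

&lt;p&gt;Whatever we append after the preamble is what runs inside the container.&lt;/p&gt;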

&lt;p&gt;The &lt;a href="https://github.com/jekyll/docker/wiki/Usage:-Running"&gt;Jekyll Docker image usage wiki page&lt;/a&gt; advises the following to run a command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jekyll &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:/srv/jekyll &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 127.0.0.1:4000:4000 &lt;span class="se"&gt;\&lt;/span&gt;
  jekyll/jekyll &lt;span class="se"&gt;\&lt;/span&gt;
  jekyll serve
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This pattern is pretty much the same as we’ll use for all of our commands so it’s worth breaking apart:&lt;/p&gt;

&lt;h3&gt;
  
  
  Removing Containers On Close (&lt;code&gt;--rm&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;We tell Docker to remove the container after it has finished running because there is nothing ‘inside’ the container that we want to keep after it has exited, and we don’t want to have to periodically delete these finished containers to save space.&lt;/p&gt;

&lt;h3&gt;
  
  
  Labelling The Container (&lt;code&gt;--label&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;We give the container a label of &lt;code&gt;jekyll&lt;/code&gt; to make it easier to keep track of.&lt;/p&gt;

&lt;h3&gt;
  
  
  Providing Access To Our Local Files (&lt;code&gt;--volume&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;We need to give the container access to the blog files that we have in the current directory. We don’t want to put the files &lt;em&gt;inside&lt;/em&gt; the container, because then we’d have to work out how to share the container with our other computers as well as having to deal with its lifecycle. It’s much easier to treat the Docker container itself as disposable and then worry about giving it access to our data which we’ll store elsewhere. To share the blog posts on our different computers we can easily mirror our simple markdown files with Dropbox, Google Drive, or whatever.&lt;/p&gt;

&lt;p&gt;The pattern we’ll see as we use Docker for more and more local apps is that some directory on our local machine–in this case the current working directory, obtained with &lt;code&gt;$(pwd)&lt;/code&gt;–is made available &lt;em&gt;inside&lt;/em&gt; the container, as if it was the directory &lt;code&gt;/srv/jekyll&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting To Our Console (&lt;code&gt;-it&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;We also need to tell Docker that we want to interact with the session whilst it’s running, which we can do using the &lt;code&gt;-i&lt;/code&gt; and &lt;code&gt;-t&lt;/code&gt; options. With Jekyll we’re mainly interested in any output messages, but there will be other apps–like email and chat clients–where we’ll actually interact fully with the running container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mapping The Ports (&lt;code&gt;-p&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;To get access to the running web server from a web browser on our local machine we need to give the &lt;code&gt;-p&lt;/code&gt; option. This maps a port &lt;em&gt;inside&lt;/em&gt; the container, to a port on our local machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the Command
&lt;/h2&gt;

&lt;p&gt;So what happens when we run this command?&lt;/p&gt;

&lt;p&gt;Well first, if you haven’t run it before, the Docker image &lt;code&gt;jekyll/jekyll&lt;/code&gt; will be downloaded by Docker, and cached. Once the image has been downloaded it will be used to launch a container with all of the volumes and port mappings that we’ve given. And once all this is set up, our command (in this case &lt;code&gt;jekyll serve&lt;/code&gt;) will be run. We’ll see Jekyll’s output messages in our console and then we can navigate to &lt;code&gt;http://localhost:4000/&lt;/code&gt; in a web browser.&lt;/p&gt;

&lt;p&gt;The command we’ve just run is a bit of a mouthful so it might be tempting to create a shell script or alias to keep it shorter. However, there is another technique which is part of the Docker ecosystem, and gives us a number of advantages.&lt;/p&gt;

&lt;p&gt;We’ll look at the benefits in a moment, but for now let’s look at how the approach works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Compose
&lt;/h2&gt;

&lt;p&gt;The Docker ecosystem comes with a powerful tool called Docker Compose that you can think of as sitting ‘above’ Docker itself, since it’s capable of launching multiple Docker containers as a unit (hence ‘compose’). Docker Compose provides a convenient way to specify all of the different aspects of the various containers that you might want to run–volumes, network settings, which other containers to depend on, and so on–which makes it ideal, even when just launching one container.&lt;/p&gt;

&lt;p&gt;So let’s see what a Docker Compose file that provides the same Jekyll ‘preamble’ might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;version: &lt;span class="s2"&gt;"2"&lt;/span&gt;
services:
  jekyll:
    image: jekyll/jekyll
    ports:
    - &lt;span class="s2"&gt;"4000:4000"&lt;/span&gt;
    volumes:
    - &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/srv/jekyll
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Docker Compose will default to wiring in a terminal for us so we don’t need to specify the &lt;code&gt;-i&lt;/code&gt; and &lt;code&gt;-t&lt;/code&gt; options we saw before. However, for some reason the default behaviour with &lt;code&gt;docker-compose run&lt;/code&gt; is &lt;em&gt;not&lt;/em&gt; to automatically delete any finished containers, or to map the exposed ports (both of which &lt;em&gt;do&lt;/em&gt; happen when using &lt;code&gt;docker-compose up&lt;/code&gt;), so we’ll need to add the &lt;code&gt;--service-ports&lt;/code&gt; and &lt;code&gt;--rm&lt;/code&gt; options.&lt;/p&gt;

&lt;p&gt;If you create a file called &lt;code&gt;docker-compose.yml&lt;/code&gt; in your blog directory with the YAML that we have above, you can then get the effect of the longer Docker command by doing this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--service-ports&lt;/span&gt; jekyll jekyll serve
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In effect we’ve replaced the preamble of the long &lt;code&gt;docker run ...&lt;/code&gt; command with &lt;code&gt;docker-compose run --rm --service-ports jekyll&lt;/code&gt;, i.e., we’ve told Docker Compose to launch a Docker container using the options found under the key ‘jekyll’ in the &lt;code&gt;docker-compose.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;p&gt;We’ve seen how it works, so now what are the benefits?&lt;/p&gt;

&lt;p&gt;An obvious one is that we’ve reduced the size of the preamble to something a little more manageable; we’ve taken this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jekyll &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:/srv/jekyll &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 127.0.0.1:4000:4000 &lt;span class="se"&gt;\&lt;/span&gt;
  jekyll/jekyll &lt;span class="se"&gt;\&lt;/span&gt;
  ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;down to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--service-ports&lt;/span&gt; jekyll ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But as we said above, we could have done that with an alias or shell script. The big advantage here is that we’ve captured the detail of the command–the name of the image to download and launch, the volume mapping to use, the ports to expose, and so on–in a platform-independent language that can be checked for validity and easily shared on different machines.&lt;/p&gt;

&lt;p&gt;In fact, in the case of this Jekyll example the details of the blogging software you are using could actually be saved as part of your blog; you could save the &lt;code&gt;docker-compose.yml&lt;/code&gt; file in the same repo as the markdown for the blog, and be up and running on any new machine within minutes, simply by running the &lt;code&gt;docker-compose&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;This is not to be taken lightly. It’s as if we’ve installed the blogging software in the same directory as the blog itself, and committed it to source control; we’ve managed to get all the benefits of linking an app directly to the subdirectory that it acts upon, but with none of the disadvantages.&lt;/p&gt;
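
&lt;p&gt;As a minimal sketch (the &lt;code&gt;/tmp/myblog&lt;/code&gt; directory is hypothetical; in practice it would be the repo that also holds the markdown), the compose file really can just be written alongside the posts, and with Docker Compose installed, &lt;code&gt;docker-compose config&lt;/code&gt; will parse and validate it before we ever commit it:&lt;/p&gt;

```shell
# Hypothetical blog directory standing in for the real blog repo.
mkdir -p /tmp/myblog && cd /tmp/myblog

# The compose file lives next to the content it serves.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  jekyll:
    image: jekyll/jekyll
    ports:
    - "4000:4000"
    volumes:
    - ${PWD}:/srv/jekyll
EOF

# With Docker Compose installed, `docker-compose config` would now
# validate this file.
echo "compose file saved alongside the blog content"
```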

&lt;h2&gt;
  
  
  Standard Preamble
&lt;/h2&gt;

&lt;p&gt;One last benefit is that by taking a lot of the parameters out into a configuration file we’ve effectively made our preamble consistent across many apps. This lends itself nicely to a simple alias or shell script that abbreviates things a little further; instead of running this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--service-ports&lt;/span&gt; jekyll jekyll serve
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;we could simply do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;dcr jekyll serve
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A simple alias to achieve this could be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;function &lt;/span&gt;dcr &lt;span class="o"&gt;{&lt;/span&gt;
  docker-compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--service-ports&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So instead of having one alias for each of our commands, we have a single alias that encapsulates the preamble and can be used for all commands.&lt;/p&gt;
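
&lt;p&gt;Here’s a dry run of that single alias, with &lt;code&gt;docker-compose&lt;/code&gt; stubbed out as a shell function so the expansion is visible. Note that &lt;code&gt;"${@}"&lt;/code&gt; includes &lt;code&gt;"${1}"&lt;/code&gt;, which works out neatly here: the service name (&lt;code&gt;jekyll&lt;/code&gt;) doubles as the first word of the command (&lt;code&gt;jekyll serve&lt;/code&gt;):&lt;/p&gt;

```shell
# Stand-in for docker-compose so we can see what dcr expands to; the real
# binary would launch the container instead of echoing.
docker-compose() { echo "docker-compose $*"; }

function dcr {
  docker-compose run --rm --service-ports "${1}" "${@}"
}

# "${1}" supplies the service name; "${@}" then repeats it as the first
# word of the command run inside the container.
dcr jekyll serve
# → docker-compose run --rm --service-ports jekyll jekyll serve
```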

&lt;h2&gt;
  
  
  Using A Global &lt;code&gt;docker-compose.yml&lt;/code&gt; File
&lt;/h2&gt;

&lt;p&gt;Having a local &lt;code&gt;docker-compose.yml&lt;/code&gt; file in our blog is great, and with this shell function we can certainly get up and running pretty quickly. But what about our other applications? We were talking about how to get our entire laptop up to speed as quickly as possible, and this doesn’t yet do that.&lt;/p&gt;

&lt;p&gt;One way we could take this further would be to have a single central Docker Compose file that contains instructions for all of our applications. For example, we might have entries for Jekyll and Mutt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;version: &lt;span class="s2"&gt;"2"&lt;/span&gt;
services:
  mutt:
    image: fstab/mutt
    volumes:
    - ~/.mutt:/home/mutt

  jekyll:
    image: jekyll/jekyll
    ports:
    - 4000:4000
    volumes:
    - &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/srv/jekyll
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With a file like this it would then just be a question of placing it somewhere convenient and ensuring that it’s mirrored to our Git repo, before modifying the alias so that it finds this file. The &lt;code&gt;-f&lt;/code&gt; option does this, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;function &lt;/span&gt;dcr &lt;span class="o"&gt;{&lt;/span&gt;
  docker-compose &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--service-ports&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; ~/.dcr/docker-compose.yml run &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is a little cumbersome to maintain, and has the disadvantage of being tied to a particular operating system–although it would no doubt be pretty simple to convert.&lt;/p&gt;

&lt;p&gt;But maybe we can go one better, by using the Node package manager tools to manage our apps?&lt;/p&gt;

&lt;h2&gt;
  
  
  Using NPM Modules
&lt;/h2&gt;

&lt;p&gt;NPM gives us a couple of big benefits that can take our Docker Compose approach to another level. The first is that by treating our apps as a module we can easily keep track of the corresponding &lt;code&gt;docker-compose.yml&lt;/code&gt; files.&lt;/p&gt;

&lt;p&gt;The second advantage is that NPM has the ability to create symbolic links to applications when a module is installed; we can now get the same functionality that we’d get from aliases and functions, but in a way that will work across operating systems.&lt;/p&gt;

&lt;p&gt;So to install our Jekyll Docker shortcut we might simply have to do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; dcr-jekyll
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We now have a cross-platform way to manage installations of our cross-platform applications!&lt;/p&gt;

&lt;p&gt;I’ve implemented this NPM approach at &lt;a href="https://github.com/markbirbeck/docker-compose-run"&gt;docker-compose-run&lt;/a&gt; on GitHub. This module provides the equivalent of the alias to run Docker Compose with the right parameters. It’s then used to create application shortcuts such as &lt;a href="https://github.com/markbirbeck/dcr-jekyll"&gt;dcr-jekyll&lt;/a&gt; and &lt;a href="https://github.com/markbirbeck/dcr-mutt"&gt;dcr-mutt&lt;/a&gt;. For a list of applications available see &lt;a href="https://github.com/markbirbeck/docker-compose-run/wiki"&gt;the wiki page&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reproducible Laptop
&lt;/h2&gt;

&lt;p&gt;So now we’re able to run our apps, and we’re also able to use the same app across multiple platforms; the final part of the jigsaw is to keep track of the apps that we want to use on any of our devices. And here there is an interesting twist.&lt;/p&gt;

&lt;p&gt;If we run any of our commands, such as &lt;code&gt;dcr-jekyll serve&lt;/code&gt;, and the required Docker image is not in the cache, then Docker will download it.&lt;/p&gt;

&lt;p&gt;But if we &lt;em&gt;never&lt;/em&gt; run the command, it never gets downloaded.&lt;/p&gt;

&lt;p&gt;Which means that we aren’t actually ‘installing’ all the commands we want on our brand new laptop; all we need to do is to ensure that our brand new laptop has Docker on it, as well as all of the &lt;code&gt;docker-compose.yml&lt;/code&gt; files that drive our applications, and then when we run a command Docker will do what it’s good at and download the correct images. All of this means that we get our new laptop up to speed even faster, because we don’t install everything in one go.&lt;/p&gt;

&lt;h3&gt;
  
  
  A ‘mydesktop’ Repo
&lt;/h3&gt;

&lt;p&gt;Since everything we need is now in the &lt;code&gt;dcr-*&lt;/code&gt; style repos, then all we now need is a single package that brings the whole lot together…and that’s super simple; just create a package listing whatever &lt;code&gt;dcr-*&lt;/code&gt; applications you need, and add &lt;code&gt;bin&lt;/code&gt; definitions for the application name by which you want to run the command.&lt;/p&gt;

&lt;p&gt;For example, if we want to have the &lt;code&gt;dcr-jekyll&lt;/code&gt; and &lt;code&gt;dcr-mutt&lt;/code&gt; applications available, but we want to run them by simply typing &lt;code&gt;jekyll&lt;/code&gt; or &lt;code&gt;mutt&lt;/code&gt;–rather than &lt;code&gt;dcr-jekyll&lt;/code&gt; and &lt;code&gt;dcr-mutt&lt;/code&gt;–then we’d have a &lt;code&gt;package.json&lt;/code&gt; file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@markbirbeck/mydesktop"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.2.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"My desktop applications, saved as an NPM package."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"bin"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"jekyll"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./node_modules/.bin/dcr-jekyll"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mutt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./node_modules/.bin/dcr-mutt"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dcr-jekyll"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^0.3.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dcr-mutt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^0.2.1"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In short, whatever you put in the &lt;code&gt;bin&lt;/code&gt; key is the application name you’ll use.&lt;/p&gt;

&lt;p&gt;Now all that’s left to do is push this to GitHub–here’s &lt;a href="https://github.com/markbirbeck/mydesktop"&gt;my desktop&lt;/a&gt; if you want to take a look–and then you can install your &lt;em&gt;entire&lt;/em&gt; saved desktop with this simple command (or your version of it):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; markbirbeck/mydesktop
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We’re making use of the fact that NPM will install from GitHub when presented with an &lt;code&gt;a/b&lt;/code&gt; style module name, which means we won’t clutter the NPM registry with our own personal modules.&lt;/p&gt;

&lt;p&gt;There’s nothing to stop you taking this approach further and creating separate app collections for development tools, communications tools, games, tools you use for particular clients (if you’re a freelancer), and so on. It would then just be a case of installing the correct combination for work and home machines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It might feel like we’ve done a lot of work here, but actually we’ve simply brought together some state-of-the-art technologies to make management of our desktop machines simple, trackable and reproducible; we’ve used Docker to run the same application on different operating systems, we’ve used Docker Compose to capture the detail of how each application should run, and we’ve used Node’s package manager to ensure that we can install these valuable snippets into any environment.&lt;/p&gt;

&lt;p&gt;In a future post we’ll look at how to ensure we can track our data and configuration files across devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Notes
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;When I started working on this idea I of course Googled around to see what other people had done in this space. I didn’t find anyone trying to turn the whole setup into a load of managed packages, or using Docker Compose to capture the logic, but I did find a few people using Docker to run apps that one would ordinarily install directly on a laptop. Far and away the best material I read was from Jessie Frazelle; no surprise there…she knows her Docker inside out. In particular her blog post &lt;a href="https://blog.jessfraz.com/post/docker-containers-on-the-desktop/"&gt;Docker Containers on the Desktop&lt;/a&gt; is crammed full of really clever ideas, and there are also links to talks that she has given on the subject. ↩&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Each time I’ve worked on a Ruby project it has taken an age to get everything set up correctly, whether finding missing gems, getting compilers to work, or a myriad of other issues. I’ve usually been working with developers who’ve had their systems set up for ages and can’t quite remember how they got there. ↩&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>docker</category>
      <category>dockercompose</category>
    </item>
  </channel>
</rss>
