<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Laura</title>
    <description>The latest articles on DEV Community by Laura (@dustytrinkets).</description>
    <link>https://dev.to/dustytrinkets</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F163545%2F3e68eaa8-5660-44bd-a8fb-a17b7b5637c0.jpg</url>
      <title>DEV Community: Laura</title>
      <link>https://dev.to/dustytrinkets</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dustytrinkets"/>
    <language>en</language>
    <item>
      <title>A MongoDB data storing refactor story</title>
      <dc:creator>Laura</dc:creator>
      <pubDate>Tue, 15 Feb 2022 11:55:21 +0000</pubDate>
      <link>https://dev.to/one-beyond/a-mongodb-data-storing-refactor-story-5dmp</link>
      <guid>https://dev.to/one-beyond/a-mongodb-data-storing-refactor-story-5dmp</guid>
      <description>&lt;p&gt;Over the last few months my team and I have been working on a micro-service architecture for an e-learning platform. One of the services is in charge of translating packages (books) from a given &lt;u&gt;XML DITA&lt;/u&gt; structure into a series of content in our custom JSON format, and sending the deltas of this content through a message broker so that their current states are available on a content API, ready to be retrieved by the front-end.&lt;/p&gt;

&lt;p&gt;To start, I’ll briefly explain the structure of the packages we digest, as well as the requirements we have.&lt;/p&gt;

&lt;h2&gt;
  
  
  The package structure
&lt;/h2&gt;

&lt;p&gt;A book (what we call a package) can contain the following contents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Maps&lt;/strong&gt;: structural information containing other maps and/or topics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topics&lt;/strong&gt;: structural information containing one or more particles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Particles&lt;/strong&gt;: educational pills and learning assessments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4svy2bbfh2z9zom5r08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4svy2bbfh2z9zom5r08.png" alt="JSON Package tree structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every time a piece of content changes, we must keep track of it. For this, we need to store &lt;strong&gt;three types of deltas: creations, deletions and updates&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The requirements
&lt;/h2&gt;

&lt;p&gt;The service has to accomplish the following requirements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1050%2F1%2AHbc-IVbAJhYq9rEW-e07vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1050%2F1%2AHbc-IVbAJhYq9rEW-e07vg.png" alt="requirements"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1. &lt;strong&gt;Import&lt;/strong&gt;: New packages must be translated into JSON, and their deltas published.&lt;/li&gt;
&lt;li&gt;2. &lt;strong&gt;Reimporting&lt;/strong&gt;: The editors should be able to &lt;strong&gt;go back to any given version of the package&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;3. &lt;strong&gt;Reindexing&lt;/strong&gt;: We should keep track of all the deltas for each one of the contents, to be able to &lt;strong&gt;repopulate the content API in the case of an inconsistency&lt;/strong&gt; between both services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that we are using a &lt;strong&gt;MongoDB instance in Azure CosmosDB&lt;/strong&gt;, which we found has some &lt;u&gt;limitations&lt;/u&gt; when it comes to implementing updateMany or deleteMany queries, because of the way it shards the collections.&lt;/p&gt;

&lt;p&gt;Knowing this, let’s go through the different approaches we have implemented, and what problems we have found on the way.&lt;/p&gt;

&lt;h1&gt;
  
  
  First attempt: all deltas in one content document
&lt;/h1&gt;

&lt;p&gt;Our first approach was to create one document in the database collection for each content (map, topic or particle), including an events array with the deltas of that content.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
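&lt;p&gt;As a rough sketch (field names here are illustrative, not our exact schema), a content document looked something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One document per content; every delta is appended to the events array
{
  contentId: 'topic-123',
  contentType: 'TOPIC',   // MAP, TOPIC or PARTICLE
  events: [
    { type: 'CREATED', timestamp: ISODate('2021-10-01T10:00:00Z'), payload: { } },
    { type: 'UPDATED', timestamp: ISODate('2021-11-12T09:30:00Z'), payload: { } }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;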


&lt;h2&gt;
  
  
  Adding a helper field
&lt;/h2&gt;

&lt;p&gt;Due to this structure, searching for the last event of every content led to very slow queries. For this reason, we included the &lt;strong&gt;&lt;em&gt;lastImport&lt;/em&gt;&lt;/strong&gt; object on each content, containing a reference to the last event saved in the array, to speed up the queries that didn’t need the DELETED contents.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
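&lt;p&gt;Sketched with the helper field (again, illustrative names), &lt;em&gt;lastImport&lt;/em&gt; mirrors the newest entry of the events array:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// lastImport duplicates the newest event so queries can skip scanning the array
{
  contentId: 'topic-123',
  contentType: 'TOPIC',
  lastImport: { type: 'UPDATED', timestamp: ISODate('2021-11-12T09:30:00Z') },
  events: [
    { type: 'CREATED', timestamp: ISODate('2021-10-01T10:00:00Z'), payload: { } },
    { type: 'UPDATED', timestamp: ISODate('2021-11-12T09:30:00Z'), payload: { } }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;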


&lt;p&gt;The problem we were facing with this approach, apart from the &lt;strong&gt;long storing times&lt;/strong&gt;, was that the &lt;em&gt;events array was going to keep growing&lt;/em&gt; every time a change was applied to the content it referred to, so the document could reach the &lt;u&gt;16 MB MongoDB document size limit&lt;/u&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Second attempt: one document per event
&lt;/h1&gt;

&lt;p&gt;We had to solve the problem of the growing events array, so we decided to switch to storing one document per event for each one of the contents.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
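&lt;p&gt;A sketch of the new shape (illustrative names): each delta becomes its own small document, so no single document grows over time:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One small document per delta; the collection grows, the documents don't
{
  packageId: 'pkg-42',
  contentId: 'topic-123',
  contentType: 'TOPIC',
  eventType: 'UPDATED',
  timestamp: ISODate('2021-11-12T09:30:00Z'),
  payload: { }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;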


&lt;p&gt;This way we fixed the document size limit issue, but we still had to solve the slow queries when inserting and retrieving deltas.&lt;/p&gt;

&lt;h1&gt;
  
  
  Time improvements via indexing
&lt;/h1&gt;

&lt;p&gt;To speed up the process we decided to investigate the usefulness of indexing different fields of the collection. We triggered a reindex and a reimport with four collections (each having a different indexed field) and compared the times:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;code&gt;(Time for the reindex and reimport processes with collections with different indexes)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Looking at the results, we decided to add the timestamp index, as we saw a significant reduction in the time spent on the reindex, and no difference in the reimport time.&lt;/p&gt;
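&lt;p&gt;Creating that index is a one-liner in the mongo shell (the collection name is assumed for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Index the field used to sort and filter deltas during a reindex
db.events.createIndex({ timestamp: 1 })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;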

&lt;h1&gt;
  
  
  Third attempt: storing the translations, not the deltas
&lt;/h1&gt;

&lt;p&gt;Despite this small time improvement, we were still unsatisfied with the results. We wanted to significantly reduce the time spent on imports, as the service was expected to process 50 products a day.&lt;/p&gt;

&lt;p&gt;To solve it, we fully changed the storing and processing paradigm: we are now &lt;strong&gt;translating and storing all the incoming packages as a whole&lt;/strong&gt;, and letting the service calculate the deltas and &lt;strong&gt;publish the deltas from each package on the go.&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
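&lt;p&gt;A sketch of a stored translation (illustrative names): one document per imported package version, with the full JSON translation embedded:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One document per package import; deltas are computed on the fly by
// diffing this translation against the previously stored version
{
  packageId: 'pkg-42',
  version: 7,
  importedAt: ISODate('2022-01-20T08:00:00Z'),
  translation: { }   // the whole package in our custom JSON format
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;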


&lt;p&gt;This way, we significantly reduce the storing time, as no deltas are stored, only the package translation. At the same time, we still keep the full translation history, so we can go back and restore a previous version, calculating the deltas on the fly whenever we want (reimport).&lt;/p&gt;

&lt;h2&gt;
  
  
  We only store translations: what about the reindex?
&lt;/h2&gt;

&lt;p&gt;The only loose end at this point was the reindexing, since we would have to calculate the deltas for all of the events that occurred since the package was created.&lt;/p&gt;

&lt;p&gt;To solve this, every time a translation was published we calculated and stored a complete history of the deltas (the &lt;strong&gt;completeDeltas&lt;/strong&gt; field), so we could easily trigger the reindex by searching for the last publication of that package and publishing those &lt;strong&gt;completeDeltas&lt;/strong&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
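&lt;p&gt;Sketched out (illustrative names), the published translation now carries the accumulated history alongside it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// completeDeltas accumulates every delta since the package was created,
// so a reindex only has to republish the latest document
{
  packageId: 'pkg-42',
  version: 7,
  translation: { },
  completeDeltas: [
    { contentId: 'map-1', eventType: 'CREATED', payload: { } },
    { contentId: 'topic-123', eventType: 'UPDATED', payload: { } }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;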


&lt;h2&gt;
  
  
  Mongo limits trouble again: Azure Blobs to the rescue
&lt;/h2&gt;

&lt;p&gt;While testing our new implementation with a series of real packages, we ran into an old problem: the mongo collection was reaching its 16 MB limit, not only when storing the completeDeltas, but also with just the translation of some big packages.&lt;/p&gt;

&lt;p&gt;We realised we wouldn’t be able to store the translations if we kept using mongo, so we had two options: switch to a relational DB, where the limit for a field is about 1 GB, and hope that no package ever reaches that size, or change the place where we store the contents and completeDeltas.&lt;/p&gt;

&lt;p&gt;We are now storing the translations in Azure Blob Storage, referencing the JSON translation URL from the package translations collection, as well as referencing the original XML content path.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
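&lt;p&gt;The collection document is now small and stable in size; a sketch (URLs and paths are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// The heavy JSON lives in Azure Blob Storage; mongo only stores references
{
  packageId: 'pkg-42',
  version: 7,
  importedAt: ISODate('2022-01-20T08:00:00Z'),
  translationUrl: 'https://ourstorage.blob.core.windows.net/packages/pkg-42/7/translation.json',
  originalXmlPath: 'packages/pkg-42/7/source.xml'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;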


&lt;p&gt;Also, the latest completeDeltas array is stored in the blob, and we overwrite the old version with the new one each time we publish the package, since we only need the last version for the reindex. The blob is organised as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1050%2F1%2AVV6rSanIj5NRVxQ_Tp8y-Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1050%2F1%2AVV6rSanIj5NRVxQ_Tp8y-Q.png" alt="organization"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this new approach, translations take less than a minute and publications no longer than 5 minutes, while we can ensure that every version coming in as XML is translated and stored without overloading the process.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>javascript</category>
      <category>azure</category>
      <category>blobs</category>
    </item>
    <item>
      <title>A practical introduction to Docker with Node.js</title>
      <dc:creator>Laura</dc:creator>
      <pubDate>Fri, 04 Feb 2022 11:05:20 +0000</pubDate>
      <link>https://dev.to/one-beyond/a-practical-introduction-to-docker-with-nodejs-6no</link>
      <guid>https://dev.to/one-beyond/a-practical-introduction-to-docker-with-nodejs-6no</guid>
      <description>&lt;p&gt;We are going to get into the basics of Docker through a Node.js example to understand its benefits. You can download the working code example for this article from &lt;a href="https://github.com/dustytrinkets/docker-node-example"&gt;the repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When we talk about software, it includes a whole stack of components, including frontend and backend components, databases, libraries, etc.&lt;/p&gt;

&lt;p&gt;During the deployment of our software, we have to ensure that all these components work on a wide range of platforms where our application may run.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Docker used for?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z81ybrDa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ed3oqwu87l8g1rlpck29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z81ybrDa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ed3oqwu87l8g1rlpck29.png" alt=":)" width="375" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m guessing you have faced the situation in which you test your application locally, and when deploying it, something doesn’t work as expected.&lt;/p&gt;

&lt;p&gt;Docker comes to solve this problem by simplifying the process of deploying an application by &lt;strong&gt;packaging it, with all of its dependencies, and running it in an isolated environment&lt;/strong&gt;, making the process very easy and efficient.&lt;/p&gt;

&lt;p&gt;Although Docker can be present in the whole software development workflow, its main use is during deployment.&lt;/p&gt;

&lt;p&gt;This way, Docker separates your application into a standardised unit that we call a container.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is a container?
&lt;/h1&gt;

&lt;p&gt;Remember we said Docker &lt;strong&gt;packages and runs your application in an isolated environment&lt;/strong&gt;. This is what we call a container.&lt;br&gt;
Containers offer a packaging mechanism in which applications can be abstracted from the environment in which they actually run, giving developers the possibility to create predictable environments. &lt;strong&gt;The container becomes the unit for testing your application&lt;/strong&gt;.&lt;/p&gt;
&lt;h1&gt;
  
  
  Why choose Docker?
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Docker vs. VMs
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Virtual machines&lt;/strong&gt;&lt;/em&gt; (VMs) are an abstraction of physical hardware turning one server into many servers. A hypervisor is computer software, firmware, or hardware that creates and runs VMs, allowing several of them to run on a single machine. Each VM includes a full copy of the operating system kernel, the application, and the necessary libraries. VMs can also be slow to boot.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;em&gt;&lt;strong&gt;kernel&lt;/strong&gt;&lt;/em&gt; is the part of an operating system that handles memory management, resource allocation and other low-level services essential to the system.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--raFGq27R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://miro.medium.com/max/1050/0%2AFYLf2FmgeMdEN9QA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--raFGq27R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://miro.medium.com/max/1050/0%2AFYLf2FmgeMdEN9QA.png" alt="Docker vs. VMs" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Containers and virtual machines have similar resource isolation and allocation benefits, but &lt;strong&gt;function differently because containers virtualise the operating system instead of hardware&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containers&lt;/strong&gt; are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space.&lt;/p&gt;
&lt;h2&gt;
  
  
  Benefits of Docker
&lt;/h2&gt;

&lt;p&gt;From an operations standpoint, it gives your infrastructure improved efficiency, which can lead to a &lt;strong&gt;better utilisation of the compute resources&lt;/strong&gt;. This allows us to run more containers on a given hardware combination than if we were using virtual machines.&lt;/p&gt;

&lt;p&gt;Going back to containers, a container is a &lt;strong&gt;runtime instance of a Docker image&lt;/strong&gt;. So basically, a Docker container consists of a Docker image, an execution environment, and a standard set of instructions. But what is an image?&lt;/p&gt;
&lt;h1&gt;
  
  
  What is an image?
&lt;/h1&gt;

&lt;p&gt;As we saw, containers are runnable instances of an image. So, unlike a container, an image &lt;strong&gt;does not have state&lt;/strong&gt; and it never changes. An image is a &lt;strong&gt;template with instructions for creating a Docker container&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;From here, we are going to be following the example from the &lt;a href="https://github.com/dustytrinkets/docker-node-example"&gt;repository&lt;/a&gt; to build our node application and dockerise it.&lt;/p&gt;

&lt;p&gt;To start, we have the index.js file, which exposes a GET endpoint and returns the port on which the application is running. We need to install express and dotenv as dependencies for this example.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
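&lt;p&gt;A minimal sketch of that index.js, consistent with the outputs shown below (the exact code lives in the repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// index.js: reads PORT and TYPE from the environment and reports them back
require('dotenv').config();
const express = require('express');

const app = express();
const port = process.env.PORT || 3000;
// TYPE tells apart how the app was started (bare, Dockerfile, docker-compose)
const type = process.env.TYPE ? process.env.TYPE + ' ' : '';

app.get('/', function (req, res) {
  res.send('Your ' + type + 'application is running on port ' + port);
});

app.listen(port);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;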


&lt;p&gt;If we run the app and browse &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt; the server will return&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your application is running on port 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;So the next question is, how do we build a Docker image?&lt;/p&gt;
&lt;h1&gt;
  
  
  What is a Dockerfile for?
&lt;/h1&gt;

&lt;p&gt;For building images, we use a Dockerfile. This is a &lt;strong&gt;file with a simple syntax for defining the steps needed to create our image and run it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Often, an image is &lt;strong&gt;based on another image&lt;/strong&gt;, with some additional customisation. This is what the Dockerfile contains. So, in order to assemble our image, we are going to create a file containing all the commands needed to build an image of our own application.&lt;/p&gt;

&lt;p&gt;We can create our own images, or use the ones created by others and published in a registry. For example, we can use any image published on &lt;strong&gt;&lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We are going to build an image of our node application. For this, we could start &lt;strong&gt;FROM&lt;/strong&gt; an Ubuntu image, install Node on top of it and then add our application, or start directly from a Node image.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
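&lt;p&gt;The Ubuntu route would start roughly like this (a sketch; we won’t use this approach):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start from a plain OS image and install Node on top of it ourselves
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y nodejs npm
# ...then copy and install our application on top
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;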



&lt;p&gt;&lt;strong&gt;Each instruction in a Dockerfile creates a layer in the image&lt;/strong&gt;, and when you change the Dockerfile and rebuild the image, &lt;strong&gt;only those layers that have changed are rebuilt&lt;/strong&gt;. This is what makes images so lightweight, small, and fast.&lt;/p&gt;

&lt;p&gt;We are going to start &lt;strong&gt;FROM&lt;/strong&gt; a Node image, and install and run our application from there as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
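&lt;p&gt;A sketch of that Dockerfile, consistent with the steps described next (the base image tag is an assumption; the exact file is in the repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start from an official Node image instead of installing Node ourselves
FROM node:16-alpine

WORKDIR /usr/src/app

# Copy package.json first so the dependency layers are only rebuilt
# when the dependencies change, not on every source change
COPY package*.json ./

# Build dependency example: this project does not need Python, it just
# shows how a build-only dependency would be installed
RUN apk add --no-cache python3
# Install the project dependencies
RUN npm install

# Now copy the source code
COPY . .

ENV PORT=3000
ENV TYPE="built with Dockerfile"

EXPOSE 3000

CMD ["node", "index.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;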


&lt;p&gt;After this, we have &lt;strong&gt;RUN&lt;/strong&gt; a pair of commands. The first installs Python as a build dependency. It is not needed by this project’s dependencies, but it is a good example of how to add build dependencies, that is, dependencies we need to build our application but no longer need once the program is compiled. The second one installs the dependencies of the project.&lt;/p&gt;

&lt;p&gt;In these examples, we &lt;strong&gt;COPY&lt;/strong&gt; the &lt;strong&gt;package.json&lt;/strong&gt; before the source code (&lt;code&gt;COPY . .&lt;/code&gt;). This is because Docker images are made up of layers, and since the file package.json does not change as often as our source code, we don’t want to keep rebuilding our &lt;strong&gt;node_modules&lt;/strong&gt; each time we run &lt;code&gt;docker build&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We are going to set the &lt;strong&gt;ENV&lt;/strong&gt; variables PORT to 3000 and TYPE, so we can see the difference when we run our dockerised app.&lt;/p&gt;

&lt;p&gt;After that, &lt;strong&gt;EXPOSE&lt;/strong&gt; tells Docker which port the container is listening on at runtime, in this case we are exposing port 3000.&lt;/p&gt;

&lt;p&gt;Finally, the &lt;strong&gt;CMD&lt;/strong&gt; command tells Docker how to run the application we packaged in the image. The CMD follows the format CMD [“command”, “argument”].&lt;/p&gt;

&lt;p&gt;If we now run the command &lt;code&gt;docker build .&lt;/code&gt;, we build the image from the Dockerfile we just created. We can also run &lt;code&gt;docker build --tag myapp .&lt;/code&gt; if we want to tag the image.&lt;/p&gt;

&lt;p&gt;We can now see the image we just built with the command &lt;code&gt;docker images&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To improve the build’s performance, we should keep unneeded files and directories out of the image by adding a &lt;code&gt;.dockerignore&lt;/code&gt; file to the same directory. In our case, we have ignored all the files we won’t be needing.&lt;/p&gt;
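&lt;p&gt;A &lt;code&gt;.dockerignore&lt;/code&gt; for a project like this could look like this (contents assumed for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Keep dependencies, logs and local config out of the build context
node_modules
npm-debug.log
.git
.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;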

&lt;p&gt;We can now run &lt;code&gt;docker run -p 3001:3000 myapp&lt;/code&gt;. This way, we are mapping our host port 3001 to the container port 3000. The pattern is &lt;code&gt;HOST:CONTAINER&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So now, when we go to &lt;a href="http://localhost:3001"&gt;http://localhost:3001&lt;/a&gt;, the server will return:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your built with Dockerfile application is running on port 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  What is docker-compose for?
&lt;/h1&gt;

&lt;p&gt;Imagine we want to run two containers, one for our node application and the other for a database in which we will store some information, for example MongoDB. This is when docker-compose becomes useful.&lt;/p&gt;

&lt;p&gt;docker-compose wraps the &lt;code&gt;docker run&lt;/code&gt; steps needed to create and run our containers: we define a multi-container application in a single file, then spin it up with a single command that does everything needed to get it running.&lt;/p&gt;

&lt;p&gt;First of all, make sure you have docker-compose installed on your machine, and add this docker-compose.yml file:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
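&lt;p&gt;A sketch of that docker-compose.yml, consistent with the services described next (the mongo image tag and exact values are assumptions; the real file is in the repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
  myapp:
    build: .                 # use the Dockerfile in the current directory
    depends_on:
      - mymongo              # start mymongo before myapp
    ports:
      - '3001:3000'          # HOST:CONTAINER, as with docker run -p
    environment:
      - PORT=3000
      - TYPE=built and run with docker-compose
  mymongo:
    image: mongo:4           # official MongoDB image from Docker Hub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;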



&lt;p&gt;Here we are giving instructions to build two images, one for &lt;strong&gt;&lt;em&gt;myapp&lt;/em&gt;&lt;/strong&gt; and one for &lt;strong&gt;&lt;em&gt;mymongo&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;&lt;em&gt;myapp&lt;/em&gt;&lt;/strong&gt; service we are telling docker-compose to use the Dockerfile in the same directory (the . in &lt;code&gt;build .&lt;/code&gt; indicates that the Dockerfile to build from is in the current working directory).&lt;/p&gt;

&lt;p&gt;We also declare that &lt;em&gt;myapp&lt;/em&gt; &lt;code&gt;depends_on&lt;/code&gt; &lt;em&gt;mymongo&lt;/em&gt;, so &lt;em&gt;myapp&lt;/em&gt; won’t start until &lt;em&gt;mymongo&lt;/em&gt; does.&lt;/p&gt;

&lt;p&gt;With the &lt;code&gt;ports&lt;/code&gt; instruction we are again mapping host port 3001 to container port 3000, as we did manually with the &lt;code&gt;docker run&lt;/code&gt; command before.&lt;/p&gt;

&lt;p&gt;We set the &lt;strong&gt;&lt;em&gt;environment&lt;/em&gt;&lt;/strong&gt; variables &lt;strong&gt;PORT&lt;/strong&gt; and &lt;strong&gt;TYPE&lt;/strong&gt; so that when we run the command &lt;code&gt;docker-compose up&lt;/code&gt; and check &lt;a href="http://localhost:3001"&gt;http://localhost:3001&lt;/a&gt; we should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your built and run with docker-compose application is running on port 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command &lt;code&gt;docker-compose up&lt;/code&gt; gives Docker the instructions to build the images and run the container according to the docker-compose.yml.&lt;/p&gt;

&lt;p&gt;The command &lt;code&gt;docker-compose down&lt;/code&gt; shuts down all the services started by the previous command.&lt;/p&gt;

&lt;p&gt;As &lt;code&gt;docker ps&lt;/code&gt; lists all running containers in the Docker engine, &lt;code&gt;docker-compose ps&lt;/code&gt; lists the containers related to the images declared in the &lt;code&gt;docker-compose.yml&lt;/code&gt; file, so the result of &lt;code&gt;docker-compose ps&lt;/code&gt; is a subset of the result of &lt;code&gt;docker ps&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Docker Command Line Cheat Sheet
&lt;/h1&gt;

&lt;p&gt;Here are some useful Docker commands explained:&lt;br&gt;
▶ &lt;code&gt;docker build --tag tagname .&lt;/code&gt;→ Build an image from the Dockerfile in the current directory and tag the image. Example: docker build --tag myapp .&lt;br&gt;
▶ &lt;code&gt;docker run -d -p 80:80 tagname&lt;/code&gt; → Run a container from an image in detached mode, mapping host port 80 to container port 80. Example: docker run -d -p 3001:3000 myapp&lt;br&gt;
▶ &lt;code&gt;docker ps&lt;/code&gt; → Check the running containers.&lt;br&gt;
▶ &lt;code&gt;docker ps -a&lt;/code&gt; → Show all containers (default shows just running ones).&lt;br&gt;
▶ &lt;code&gt;docker exec -it containername /bin/sh&lt;/code&gt; → Open an interactive shell inside a running container.&lt;br&gt;
▶ &lt;code&gt;docker images&lt;/code&gt; → List locally built images.&lt;br&gt;
▶ &lt;code&gt;docker images -a&lt;/code&gt; → See all images locally stored, even the intermediate images. Remember each Docker image is composed of layers, with these layers having a parent-child hierarchical relationship with each other. Docker calls this an intermediate image.&lt;br&gt;
▶ &lt;code&gt;docker image rm imagename&lt;/code&gt; → Remove an image.&lt;br&gt;
▶ &lt;code&gt;docker stop containername&lt;/code&gt; → Stop a container.&lt;br&gt;
▶ &lt;code&gt;docker rm containername&lt;/code&gt; → Remove a container.&lt;br&gt;
▶ &lt;code&gt;docker-compose -f path/to/docker-compose.yml up&lt;/code&gt; → Create and start a container specified on a docker compose file. Example: docker-compose -f docker/docker-compose.yml up&lt;br&gt;
▶ &lt;code&gt;docker-compose -f path/to/docker-compose.yml down&lt;/code&gt; → Stop and remove containers, networks, images and volumes. Example: docker-compose -f docker/docker-compose.yml down&lt;/p&gt;

</description>
      <category>node</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
