
Using Docker for Node.js in Development and Production

Alex Barashkov ・ 6 min read

My current primary tech stack is Node.js/JavaScript and, like many teams, I moved our development and production environments into Docker containers. However, when I started to learn Docker, I realized that most articles focused on either development or production environments, and I could find nothing about how you should organize your Docker configuration to be flexible for both cases.

In this article, I demonstrate different use cases and examples of Node.js Dockerfiles, explain the decision-making process, and help you envision how your workflow should look with Docker. Starting with a simple example, we then review more complicated scenarios and workarounds to keep your development experience consistent with or without Docker.

Disclaimer: This guide is long and is aimed at audiences with varying levels of Docker experience; at some points the instructions will be obvious to you, but I will make relevant points alongside them in order to provide a complete picture of the final setup.


Described cases

  • Basic Node.js Dockerfile and docker-compose
  • Nodemon in development, Node in production
  • Keeping the production Docker image free of devDependencies
  • Using a multi-stage build for images requiring node-gyp support

Add a .dockerignore file

Before we start to configure our Dockerfile, let's add a .dockerignore file to your app folder. Files listed in .dockerignore are excluded from COPY/ADD instructions during the build. Read more here

node_modules
npm-debug.log
Dockerfile*
docker-compose*
.dockerignore
.git
.gitignore
README.md
LICENSE
.vscode

Basic Node.js Dockerfile

To ensure a clear understanding, we will start from a basic Dockerfile you could use for simple Node.js projects. By simple, I mean that your code does not have any extra native dependencies or build logic.

FROM node:10-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npm", "start" ]

You will find something like this in every Node.js Docker article. Let’s briefly go through it.

WORKDIR /usr/src/app

The workdir is the default directory used by any subsequent RUN, CMD, ENTRYPOINT, COPY and ADD instructions. In some articles you will see that people do mkdir /app and then set it as the workdir, but this is not best practice. Use the pre-existing folder /usr/src/app, which is better suited for this.

COPY package*.json ./
RUN npm install

Here's another best-practice adjustment: copy your package.json and package-lock.json before you copy your code into the container. Docker caches the installed node_modules as a separate layer; if you change your app code but did not change package.json, the node_modules will not be installed again on the next build. Generally speaking, even if you forget to add those lines, you will not encounter a lot of problems: usually you need to run docker build only when your package.json has changed, which leads to an install from scratch anyway, and in other cases you don't run docker build often after the initial build in a development environment.
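You can see the layer caching in action by building twice; the image tag my-app below is just an illustrative name, not part of the setup above:

```shell
# First build: the npm install layer is created from scratch
docker build -t my-app .

# Change a source file (but not package.json), then rebuild:
# Docker reports "Using cache" for the COPY package*.json and npm install steps
docker build -t my-app .
```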

The moment when docker-compose comes in

Before we can run our app in production, we have to develop it. The best way of orchestrating and running your Docker environment is docker-compose: you define the list of containers/services you want to run, together with the instructions for them, in an easy-to-use YAML file.

version: '3'

services:
  example-service:
    build: .
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3000:3000
      - 9229:9229
    command: npm start

In the basic docker-compose.yaml configuration above, the build is done using the Dockerfile inside your app folder; your app folder is then mounted into the container, and the node_modules installed inside the container during the build are not overridden by your current folder. Port 3000 is exposed to your localhost, assuming that you have a web server running; port 9229 is used for exposing the debug port. Read more here.

Now run your app with:

docker-compose up

Or use the VS Code Docker extension for the same purpose.

With this command, we expose ports 3000 and 9229 of the Dockerized app to localhost, mount the current app folder to /usr/src/app in the container, and use an anonymous volume as a hack to prevent the node_modules installed during the build from being overridden by the local machine.
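docker-compose is also handy for one-off commands in the same environment; a couple of invocations you might use with the service defined above (npm test here is just an assumed script name, not one defined in this article):

```shell
# Rebuild the image after package.json changes, then start the services
docker-compose up --build

# Run a one-off command inside a fresh container of the service
docker-compose run --rm example-service npm test
```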

So can you use that Dockerfile in development and production?
Yes and no.

Differences in CMD
First of all, you usually want your development environment to reload the app on file changes. For that purpose, you can use nodemon. But in production, you want to run without it. That means your CMD (command) for development and production environments has to be different.

There are a few different options for this:

1. Replace CMD with the command for running your app without nodemon, which can be a separately defined command in your package.json file, such as:

 "scripts": {
   "start": "nodemon --inspect=0.0.0.0 src/index.js",
   "start:prod": "node src/index.js"
 }

In that case, your Dockerfile could look like this:

FROM node:10-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npm", "run", "start:prod" ]

However, because you use a docker-compose file for your development environment, we can have a different command inside it, exactly as in the previous example:

version: '3'

services:
   ### ... previous instructions
    command: npm start

2. If there is a bigger difference, or you use docker-compose for both development and production, you can create multiple docker-compose files or Dockerfiles to cover the differences, such as docker-compose.dev.yml or Dockerfile.dev.
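As a sketch of the first variant, a development override file could look like this; the filename docker-compose.dev.yml and its contents are only an illustration of the pattern, not the article's exact setup:

```yaml
# docker-compose.dev.yml: development-only overrides
version: '3'

services:
  example-service:
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm start
```

You would then combine the files with docker-compose -f docker-compose.yml -f docker-compose.dev.yml up; values in later files override matching keys from earlier ones.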

Managing packages installation
It's generally preferable to keep your production image as small as possible, and you don't want to install node module dependencies that are unnecessary in production. This can be solved while still keeping one unified Dockerfile.

Revisit your package.json file and split devDependencies apart from dependencies. Read more here. In brief, if you run npm install with the --production flag, or with NODE_ENV set to production, devDependencies will not be installed. We will add extra lines to our Dockerfile to handle that:

FROM node:10-alpine

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npm", "run", "start:prod" ]

To customize the behaviour, we use:

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

Docker supports passing build arguments through the docker command or docker-compose. NODE_ENV=development will be used by default until we override it with a different value. A good explanation can be found here.

Now, when you build your containers with the docker-compose file, all dependencies will be installed; when you build for production, you can pass the build argument as production and devDependencies will be skipped. Because I use CI services for building containers, I simply add that option to their configuration. Read more here
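Passing the build argument looks like this on the command line; the image tag my-app is just a placeholder:

```shell
# Development build (default): NODE_ENV=development, devDependencies are installed
docker build -t my-app .

# Production build: NODE_ENV=production, devDependencies are skipped
docker build --build-arg NODE_ENV=production -t my-app .
```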

Using multi-stage build for images requiring node-gyp support
Not every app you will try to run in Docker has exclusively JS dependencies; some require node-gyp and extra natively installed OS libraries.

To solve that problem, we can use multi-stage builds, which let us install and build all dependencies in a separate container and copy only the result of the installation, without any garbage, into the final container. The Dockerfile could look like this:

# The instructions for the first stage
FROM node:10-alpine as builder

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

# Native build tools required by node-gyp
RUN apk --no-cache add python make g++

COPY package*.json ./
RUN npm install

# The instructions for the second stage
FROM node:10-alpine

WORKDIR /usr/src/app
COPY --from=builder node_modules node_modules

COPY . .

CMD [ "npm", "run", "start:prod" ]

In that example, we install and compile all dependencies, based on the environment, in the first stage, then copy the resulting node_modules into the second stage, which we use in both the development and production environments.

The line RUN apk --no-cache add python make g++ may differ from project to project, most likely because you will need extra dependencies.

COPY --from=builder node_modules node_modules

In that line, we copy the node_modules folder from the first stage into a node_modules folder in the second stage. Because the second stage sets WORKDIR to /usr/src/app, the node_modules end up inside that folder. (With COPY --from, a relative source path is resolved against the root of the builder stage's filesystem, which is where npm install ran.)

Summary

I hope this guide helped you understand how to organize your Dockerfile and have it serve your needs for both development and production environments. We can sum up our advice as follows:

  • Try to unify your Dockerfile for dev and production environments; if it does not work, split them.
  • Don’t install dev node_modules for production builds.
  • Don't leave the native build dependencies required by node-gyp for node module installation in the final image.
  • Use docker-compose to orchestrate your development setup.
  • It's up to you how to orchestrate your Docker containers in production; it could be docker-compose, Docker Swarm or Kubernetes.

Discussion

Tim Ermilov

I'd suggest replacing npm install with npm ci for faster builds in your node Dockerfile 🤔

Alex Barashkov (Author)

I tried npm ci and hit a bug which has not been fixed yet: github.com/npm/npm/issues/21007. So I can't use it. I tested it on a simple configuration and it works well, but because of the bug I can't use it with unified dev/prod configs. I'll wait until they fix it and then test it properly. It's especially weird that a PR with the fix has already been submitted, but nobody has even replied about plans to merge it.

Tim Ermilov

Huh, that's an annoying bug.

Why would you want to have node_modules as a volume though? 🤔

Alex Barashkov (Author)

When you mount the app into the container, it completely overrides the destination folder, so the modules installed during the build vanish. I want to keep them, so I use that hack to exclude the node_modules folder. I haven't found a better solution for the time being.

Tim Ermilov

I get that (we usually even add node_modules to .dockerignore to avoid cross-platform compat issues). I'm just not entirely sure why you'd want to have node_modules as a volume, since you run npm install during the image build anyway. Am I missing something? 🤔

Alex Barashkov (Author)

.dockerignore only applies to COPY/ADD commands at build time. But when you mount a folder, it overrides everything that was copied or installed into the container during the build.

That gives you 3 options:

  • Also install the modules on your local machine, so they will be present in the container after the mount. I don't like it: you then also rely on your machine's setup, and cross-platform problems will appear.
  • Use the hack described here to prevent the override: stackoverflow.com/questions/300438...
  • Install node modules in a custom directory, which is also described at the link above.
Tim Ermilov

But you are using COPY in the example Dockerfiles in the article - that's what confuses me 😅
Or are you talking about using a pre-built Docker image for development with local code? Then it makes sense, but the whole approach is indeed quite cumbersome 🤔

Alex Barashkov (Author)

Goal: get a Dockerfile which fits development on a local machine.
Requirement: the app should not rely on anything on your local machine apart from the Docker installation and the app code.

For a Node.js app you need node_modules installed, so we need to install them somewhere, which brings us to the 3 points in the previous comment.

So, we are happy to do npm install in the Dockerfile because that is good for both development and production environments. By default, node_modules is installed into the same folder as your app, in our case /usr/src/app/node_modules, during the build. Then, because development on a local machine requires that your code changes are reflected in the app inside Docker, we mount our local app folder (where we don't have node_modules) into the container. It overrides /usr/src/app in the container, and the app will not start without node_modules. To use the node_modules that were installed at build time, there is the hack of using a volume, as described on Stack Overflow.

Tim Ermilov

Ah, I finally get it! 😅
Thanks for the detailed explanation!

Alex Barashkov (Author)

Thanks a lot, that's why I'm writing articles :) because it's possible to get feedback. I'd never heard about npm ci; I'm reading about it now and going to check it over the weekend.

Vincent Schoener

Thanks for the article. On my own projects I'm already using Docker this way, but I still haven't figured out the best way to have the node_modules folder available on the host so my IDE can use it for autocomplete and more. (For TypeScript, for example, it's better to get the types from the packages.)

So the way I found is to install the packages locally myself during the install process, but the two node_modules could differ if my machine's Node version differs from the container's, so that's already an issue there... And I know this is not what Docker is designed for, but in this case it would really be nice to have the files available.

Any idea? :)

Lewis Llobera

The only way I've found to solve that => stackoverflow.com/questions/510976...

Gabriel

Hi Alex, thanks for the excellent article.

I am developing something similar at work and I have a question regarding docker compose and shared volumes that I hope you could help me with.

Basically I designed the Docker Environment so the web application was split up between code and a proxy server (nginx).

The container holding the code creates a shared volume, and then the container running Nginx serves its contents.

I made it this way so it would be easier in the future to replace Nginx with other servers (e.g. Apache).

Now my question is: do you think it is appropriate to initialize the container holding the code as a service in the docker-compose file? Its purpose is only to create the shared volume (it stops immediately after that).

I am sorry if this comes across as a very noob question but I didn't find anything against or in favor of this approach.

Thank you,
Gabriel

Alex Barashkov (Author)

Hi Gabriel,

I'm not quite sure I understood what exactly is in your service. If, for example, it's something like a webpack/gulp website that you build and then use the built output as part of an nginx container, I don't see any problem with that.

In one of my projects' microservices docker-compose files, I also have one service which I start with an empty command, because I have to build it and then run some commands through it via docker-compose run.

Gabriel

That's exactly it. It is a container that only compiles the code via webpack/grunt.

Thanks!

bmnds

Hi, great article! Thanks for sharing your knowledge with us.

I have a doubt about best practices for handling environment variables.
Reading your Dockerfile, I felt it was a little strange seeing NODE_ENV=development in the builder and start:prod later on. I would expect one environment to be the default and the other one to override what is needed. But I didn't see it here; is the default 'development' or 'production'?

I understand npm install works based on NODE_ENV variable, so whether development or production, it will work as expected.

Is there a similar solution for npm start? To run the correct command based on NODE_ENV variable?

anand kumar

You can have your npm scripts like start:development and start:production.
In the Dockerfile, you can use CMD [ "npm", "run", "start:${NODE_ENV}" ].

Gleycer Parra

Hi, I have the same doubt; if you have found a solution, please share it, I would appreciate it.

Felix Pojtinger

If someone gets the following error on an SELinux-enabled machine (such as Fedora GNU/Linux):

Error: EACCES: permission denied, scandir '/usr/src/app'
example-service_1 | (node:1) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'get' of undefined

change this:

      - .:/usr/src/app

to this:

      - .:/usr/src/app:z

This took some time to figure out, be sure to thank Stack Overflow ;)

Ben

Oh. My. God.

Thank you. I was close to literally pulling my hair out.

Christian Brintnall

Although I agree docker-compose is best for local orchestration, Kubernetes reigns supreme for container orchestration overall. You should give it a shot next if you haven't already. It will make your deployments so much easier.

Alex Barashkov (Author)

We already use Kubernetes in production and docker-compose for development environments. Kubernetes is now another trend that is hard to avoid.

Roger K.

Could you follow up with an article on Kubernetes? That would be awesome, because you explained things really well. Docker looks so easy, but it's not, and I learned a lot from you tonight. I super appreciate all your effort!

Lewis Llobera

Any idea why my node_modules is empty? 🤔 It annoys my IDE a lot 😅

Edit: fixed it with this approach => stackoverflow.com/a/61137716/8966651

Flavio Brusamolin

Same problem here

David J Eddy

Nice article Alex. Good to see other people care about environment parity; Docker is not just for production. A couple of points to share:

"...Replace CMD with the command for running your app without nodemon..." Check out this article on ENTRYPOINT vs CMD. I found it super helpful, especially when writing my own images and needing to change the execution command.

I look forward to your next article, keep up the good work!

Roger K.

The link doesn't work, and I dug for the article on my iPad but couldn't find it either. Any suggestions or alternatives?

Nikolay Geo

Hi Alex, great stuff; I've been working on something similar at my company for quite a while. I wanted to ask you one more thing on the subject of your article: could you get all of your Windows, Linux and Mac based developers to use your Docker-based dev environment?

ohryan

Hey Alex, thank you for the article; it's helped me get up and running with Node/Docker better than any other article so far. I'm brand new to Docker, so a lot of this is still kind of confusing to me. I was hoping to get a few questions answered.

1) If using yarn, does it need to be installed into the Docker container first? I swapped npm for yarn in the examples you gave and it worked fine, but I don't know if that's just because I have yarn installed globally on my PC.

2) I don't really get the concept of having a Dockerfile (which you said we're supposed to set up to work for both prod and dev environments) and a docker-compose file. If docker-compose is used for dev, why does the Dockerfile have to be configured for dev? I don't really understand when and how each of them is used relative to the other.

3) While developing, do you have to continually rebuild the image as you add dependencies?

Thank you for your time and for the article, much appreciated!

Alex Barashkov (Author)

Hey @ohryan
1) yarn is part of the Node Docker image; that's why it works for you.
2) Actually, I'm proposing to unify the dev and prod Dockerfiles wherever possible. In most of the projects I've worked on, they could easily be the same.
3) That is the one downside: after changing your dependencies, you have to rebuild the image. Fortunately, that mostly happens at the beginning of a project. But it always depends on your docker-compose configuration; in my example the goal was to rely only on Docker on the local machine, but with small changes you could switch to installing node modules on the local machine and then using them with Docker.

Andriy Khomych

dev.to/alex_barashkov/comment/8a92
"install node modules in a custom directory that also described by the link above"

And why not use this solution?
IMHO, it sounds better to me, as it aligns with:
1) Docker best practices; to me it is more volume data than app dependencies.
2) Your node modules are explicitly available, and you can check their source code without a problem.
3) All node modules are installed the same way as without Docker. Personally, that is more of a plus, as without Docker we've used the same approach, and Docker is more for managing system dependencies. As I say, IMHO.

msvargas

Hi, thanks for the post. On Windows, nodemon does not pick up file changes; you have to enable change propagation with nodemon --legacy-watch src/index.js

stackoverflow.com/questions/392396...

Sumit Sampang Rai

Hi Alex,
I went through your tutorial and all the steps went well. However, I ran into a problem: when I build my production docker-compose file before the development docker-compose file, the app image cannot find nodemon. If I build development before production, all the development modules are available in the app image, and nodemon is available as well. Is it supposed to be like that? Or did I miss something?
And another question: how do you install new dependencies in your images?

Jonathan Boudreau

My experience with this on larger projects is that the file share between OSX and the Docker VM is too slow for development; you'll probably have to change file shares at some point. To solve this, I ended up installing Docker in a Vagrant VM and using NFS (with the NFS server running in the Linux VM) to share the files.

michjoh

Thx for the tips - very useful to setup effective dev configuration :)

Norman Enmanuel

I find that using nodemon inside the container works, but it is slow on every change.
I can use it anyway, but it's slow; how do you work around that?
Locally, nodemon shines.
