Rubin

Docker for Node.js

Docker is a great containerization platform with tons of features out of the box. So, in this post we are going to skip the traditional way of hosting an app with packages like pm2 (although we can still use it inside Docker).
First of all, we will start by making a Dockerfile. A Dockerfile is a way to package your application.
You can learn the basics of Docker from the link.

The content of the Dockerfile will look like this:


FROM node:10


WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]


This will tell the Docker engine to use the node:10 image and perform the steps. Though the file is self-explanatory, I will still do a little bit of explaining:

  • First, it will pull the image from Docker Hub if it cannot find it on the machine
  • Then it will use the directory /usr/src/app as the working directory for the project
  • Thirdly, it will copy package.json and package-lock.json into the working directory and perform npm install, which will in turn install all the required dependencies

  • After the dependencies are installed, it will copy all the files from the host machine into the container. Since we already have node_modules inside the container, we may want to skip copying it. This can be done via a .dockerignore file. Think of .dockerignore as .gitignore, but for Docker

A sample .dockerignore file:

# Logs
logs
*.log

# Runtime data
pids
*.pid
*.seed

# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov

# Coverage directory used by tools like istanbul
coverage

# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt

# node-waf configuration
.lock-wscript

dist

node_modules

server/*.spec.js

  • The EXPOSE instruction tells Docker which port the container listens on, followed by the port number, which is 8080 in our case. Make sure to match this with the port used by the app

  • The CMD instruction will execute the command passed to it, which is node server.js here. It can even be an npm script like npm start. This should be the command that spins up the server (a minimal server.js is sketched below)
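For reference, here is a minimal server.js that matches the exposed port (a sketch using Node's built-in http module; the actual server code isn't shown in this post):

// server.js - a minimal HTTP server listening on the port exposed in the Dockerfile
const http = require('http');

const PORT = 8080;

http
  .createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from inside the container\n');
  })
  .listen(PORT, () => console.log(`Listening on port ${PORT}`));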

Building your image

Go to the directory that has your Dockerfile and run the following command to build the Docker image. The -t flag lets you tag your image so it's easier to find later using the docker images command:

docker build -t <your username>/node-web-app .

Run the image

Running your image with -d runs the container in detached mode, leaving the container running in the background. The -p flag redirects a public port to a private port inside the container. Run the image you previously built:

docker run -p 49160:8080 -d <your username>/node-web-app
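You can then check that the container is up and hit the mapped port (assuming the app responds on its root path):

docker ps
curl -i http://localhost:49160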

However, this approach doesn't reflect changes made to your code after the image is built. So for every change you have to perform the build and run steps again and again.

Luckily, Docker comes with something called volume mapping, which, instead of copying the files, maps the working directory to the files on the host machine. So every time a file in your app changes, the change is automatically reflected inside the container as well, and you won't need to build the image again.
To use this approach, the Dockerfile becomes:


FROM node:10

WORKDIR /usr/src/app

COPY package.json .
RUN npm i
COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]


Once you have modified the file, you can build the image as you did previously.

To run the built image, though, there is a slight change:

docker run -p 49160:8080 -v $(pwd):/usr/src/app -d <your username>/node-web-app


pwd is the command to get the current directory on Linux, so make sure to execute the run command while you are inside the app directory.
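One caveat: the volume mount keeps the files in sync, but a plain node server.js process won't pick up those changes until it restarts. A common option, not covered above, is to override the container command with a file watcher such as nodemon (assuming it is installed as a dev dependency or fetched by npx):

docker run -p 49160:8080 -v $(pwd):/usr/src/app -d <your username>/node-web-app npx nodemon server.js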

Top comments (7)

Yanik Peiffer

Usually I use the alpine version of the Node image. It still includes the basic components that are required for most projects. It saves you a lot of space and your image is smaller.
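For example, switching to it is a one-line change to the FROM instruction in the Dockerfile above (sketch):

# smaller alpine-based variant of the same Node 10 image
FROM node:10-alpine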

Zafri Zulkipli • Edited

Does a smaller image mean faster resource handling in this case? I had a problem using alpine before (can't exactly remember what it was, sorry) and had to switch to an ubuntu-based image. It can handle resources as fast as alpine except that the image size is larger; compatibility-wise, the ubuntu-based image is much better. So for me, I prefer higher compatibility over image size.

Milebroke

The cleanest approach would be a Docker multi-stage build. You can take whatever image you need for the build, then build the application in it. Afterwards, you create your runtime image from an alpine version, take the deployment artifact (hopefully a single js file) from the build container, and just run it.

"I prefer higher compatibility over image size."
-> higher compatibility would mean a bigger security attack surface. With multi stage you can have the best out of both worlds :)

docs.docker.com/develop/develop-im...
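A rough sketch of such a multi-stage Dockerfile, assuming the build step bundles the app into dist/server.js (script and file names are illustrative):

# build stage: full node image with the toolchain needed to compile and bundle
FROM node:10 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# runtime stage: copy only the built artifact onto a small alpine base
FROM node:10-alpine
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/dist ./dist
EXPOSE 8080
CMD [ "node", "dist/server.js" ]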

Rubin • Edited

Yes, that is true, but some packages like bcrypt, which are built from source by node-gyp, cause problems on alpine.
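A common workaround, not mentioned in the thread, is to install the toolchain node-gyp needs on top of the alpine image, for example:

FROM node:10-alpine
# node-gyp needs python and a C/C++ toolchain, which alpine does not ship by default
RUN apk add --no-cache python make g++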

Sung M. Kim

Being a Docker newbie myself, would it make sense to start off with an alpine version and only switch to a fuller image if there is a problem?

Or should I stick with a full node image initially?

Yanik Peiffer

I like to keep my images small, because I store them online in my image registry. Whenever a node package doesn't work and is definitely required, I try out another node image. The worst option is the full node image; with that, the size of my docker images can be up to 1 GB.

Sung M. Kim

I've been wondering why some of my images are gigantic!... I see that the base layer choice makes a huge difference in terms of the resulting image size.

Thank you, Yanik