Learn how to set up Node.js inside of a Docker container
Goals of this article
- Have a working Node.js application
- Make the Node app resilient by ensuring the process does not exit on error
- Make the Node app easy to work with by auto restarting the server when code changes
- Utilize Docker to:
  - Quickly set up a development environment that is identical to production
  - Easily switch Node versions both locally and on a server
  - Get all the other benefits of Docker
Prerequisites
- Docker already installed
- At least entry-level knowledge of Node and NPM
If you are the type of person who just wants to see the end result, then maybe the GitHub repo will suit you better.
1. Get a simple Node app in place
We are going to use Express because it is easy to set up and is one of the most popular Node frameworks.
In a clean directory, start by initializing NPM. Go ahead and run this command and follow the prompts (what you enter in the prompts is not that important for this guide):
npm init
Install Express
npm install --save-prod express
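If the install succeeds, Express will show up under dependencies in your package.json, along the lines of the snippet below (the exact version number will differ depending on when you run the install):
{
  "dependencies": {
    "express": "^4.17.1"
  }
}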
Set up a basic Express server. The file below starts a Node process that listens on port 3000 and responds with Hello World! to the / route.
src/index.js
const express = require('express')
const app = express()
const port = 3000
app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
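If you happen to have Node installed locally and want to sanity-check the server before Docker enters the picture, you can run it directly. This step is optional, since the rest of the guide runs everything through Docker:
node src/index.js
# in another terminal
curl http://localhost:3000
# Hello World!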
2. Set up Docker to run our Node app
We will be using a docker-compose.yml file in order to start and stop our Docker containers as opposed to typing long Docker commands. You can think of this file as a config file for multiple Docker containers.
docker-compose.yml
version: "3"
services:
  app:
    container_name: app # How the container will appear when listing containers from the CLI
    image: node:10 # The <image-name>:<tag> to run; in this case the tag aligns with the major version of Node
    user: node # The user to run as inside the container
    working_dir: "/app" # Where the container will assume it should run commands and where you will start out if you go inside the container
    networks:
      - app # Networking can get complex, but for all intents and purposes just know that containers on the same network can speak to each other
    ports:
      - "3000:3000" # <host-port>:<container-port> to listen to, so anything running on port 3000 of the container will map to port 3000 on our localhost
    volumes:
      - ./:/app # <host-directory>:<container-directory> this maps the current directory on your system to the /app directory in the Docker container
    command: "node src/index.js" # The command Docker will execute when starting the container. This command is not allowed to exit; if it does, your container will stop
networks:
  app:
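Because YAML is picky about indentation, it can be worth validating the file before starting anything. docker-compose config prints the fully resolved configuration, or an error if the file is malformed:
docker-compose config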
Now that we have our config in place, let's start the Docker container with this command. It starts the containers defined in our config file and runs them in the background (-d):
docker-compose up -d
Now you should be able to go to localhost:3000 in your browser and see Hello World!
You should also be able to verify that your container is running by running
docker ps
which should output the list of your running Docker containers.
Useful docker commands for managing this container
List all running containers
docker ps
List all containers, regardless of whether they are running
docker ps -a
Start containers from a docker-compose.yml file in the same directory
docker-compose up -d
Stop containers from a docker-compose.yml file in the same directory
docker-compose stop
Restart containers from a docker-compose.yml file in the same directory
docker-compose restart
See the log files from your docker container
docker-compose logs -f
3. Make our application resilient
If you have worked with Node before, then you probably know that an error like an uncaught exception will shut down the Node process. That is really bad news for us, because we are bound to have a bug in our code at some point and can never guarantee our code is 100% error free. The usual solution to this problem is another process that watches our Node app and restarts it if it quits. There are plenty of options out there, like Linux's supervisord or the NPM packages forever and PM2; we just need to pick one for this guide.
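To see the problem first-hand, here is a hypothetical route you could temporarily add to src/index.js (it is not part of the app we are building). Express handles synchronous errors thrown in a route handler, but an error thrown from an asynchronous callback is an uncaught exception and will kill the whole Node process, taking the container's command down with it:
// Hypothetical /crash route for demonstration only -- remove it afterwards.
// The error is thrown from inside setTimeout, so Express cannot catch it and
// the Node process exits.
app.get('/crash', (req, res) => {
  setTimeout(() => {
    throw new Error('Simulated uncaught exception')
  }, 0)
})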
I am going to focus on PM2 since I am most familiar with it, and it also comes with other features besides process management, such as file watching, which will come in handy in the next section.
Install PM2
npm install --save-prod pm2
PM2 can be used through the command line, but we are going to set up a simple config file, much like we did with the docker-compose.yml file, to avoid typing long commands repeatedly.
ecosystem.config.js
const path = require('path')
module.exports = {
  apps: [{
    name: 'app',
    script: 'src/index.js', // Your entry point
    instances: 1,
    autorestart: true, // THIS is the important part, this will tell PM2 to restart your app if it falls over
    max_memory_restart: '1G'
  }]
}
Now we should change our docker-compose.yml file to use PM2 to start our app instead of starting it directly from index.js.
docker-compose.yml (Only changed the command option)
version: "3"
services:
  app:
    container_name: app # How the container will appear when listing containers from the CLI
    image: node:10 # The <image-name>:<tag> to run; in this case the tag aligns with the major version of Node
    user: node # The user to run as inside the container
    working_dir: "/app" # Where the container will assume it should run commands and where you will start out if you go inside the container
    networks:
      - app # Networking can get complex, but for all intents and purposes just know that containers on the same network can speak to each other
    ports:
      - "3000:3000" # <host-port>:<container-port> to listen to, so anything running on port 3000 of the container will map to port 3000 on our localhost
    volumes:
      - ./:/app # <host-directory>:<container-directory> this maps the current directory on your system to the /app directory in the Docker container
    command: "npx pm2 start ecosystem.config.js --no-daemon" # Start the app through PM2. The --no-daemon flag keeps PM2 in the foreground; if PM2 daemonized itself, this command would exit and the container would stop
networks:
  app:
It should be noted that changing your docker-compose.yml file will not affect already running containers, and docker-compose restart only restarts them with their old configuration. To pick up the new command, recreate the containers with
docker-compose up -d
Great, we should now be back to a working app at localhost:3000, but now our app will not fall over when errors occur.
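If you want to confirm that PM2 is really supervising the app, you can ask it from inside the running container. This assumes the container name app and working directory /app from our docker-compose.yml; pm2 status and pm2 logs are standard PM2 subcommands, and npx picks up the copy we installed into node_modules:
docker exec -it app npx pm2 status
# If you added the hypothetical /crash route from earlier, trigger it and
# watch PM2 bring the process back up in the logs:
curl http://localhost:3000/crash
docker-compose logs -f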
4. Make our application easy to develop on
You may have noticed that once a Node process has started, changing the code does not actually do anything until you restart that process, and for us that would mean restarting our Docker containers every time we make a change. Ewwwwwwwww, that sounds awful. It would be ideal if our Node process restarted automatically whenever we make a code change. In the past I have done things like bringing in a file watching utility to restart Docker on file changes, or using Nodemon, but that comes with some caveats when using Docker. Recently I have been using PM2 to restart my Node process when a file changes, and since we already pulled it in in the previous step, we won't have to install another dependency.
ecosystem.config.js (only added the watch option)
const path = require('path')
module.exports = {
  apps: [{
    name: 'app',
    script: 'src/index.js',
    instances: 1,
    autorestart: true,
    watch: process.env.NODE_ENV !== 'production' ? path.resolve(__dirname, 'src') : false,
    max_memory_restart: '1G'
  }]
}
The config file above will now watch the src directory as long as the NODE_ENV environment variable is not set to production. You can test it out by changing your index.js file to print something other than Hello World! to the browser. Again, before this can work you need to restart your Docker containers, since you changed how PM2 runs your app:
docker-compose restart
It should be noted that restarting the Node process may take a second to finish. If you want to know when it is done, you can watch your Docker logs to see when PM2 has finished restarting your Node process.
docker-compose logs -f
You will see PM2 log the restart in that output whenever your process has been restarted.
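One optional tweak: on some setups (Docker Desktop on Mac or Windows in particular), file-change events from the host do not always reach the container reliably, so PM2's watcher can miss edits. PM2's watching is built on chokidar and accepts chokidar options through watch_options, so you can fall back to polling; ignore_watch keeps it from watching dependencies. This is a fallback sketch for when default watching does not work, not something this setup requires:
const path = require('path')
module.exports = {
  apps: [{
    name: 'app',
    script: 'src/index.js',
    instances: 1,
    autorestart: true,
    watch: process.env.NODE_ENV !== 'production' ? path.resolve(__dirname, 'src') : false,
    ignore_watch: ['node_modules'], // never watch dependencies
    watch_options: {
      usePolling: true, // poll for changes instead of relying on filesystem events
      interval: 1000 // check roughly once per second
    },
    max_memory_restart: '1G'
  }]
}
As with the earlier change, restart your containers afterwards so PM2 re-reads the config.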
Wrapping Up
One of our goals was to be able to easily change Node versions; you can do this by changing the image option in the docker-compose.yml file, as shown below.
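For example, to try Node 12 instead of Node 10 (node:12 is another official tag on Docker Hub, just like node:10), the only line that changes is the image, and then you recreate the container with docker-compose up -d:
services:
  app:
    image: node:12 # was node:10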
Installing dependencies locally is done with your local NPM and Node version, which can sometimes cause conflicts if your local versions differ from Docker's. It is safer to install your dependencies using the same Docker image. You can use this command, which spins up a container from that image, installs the dependencies, and then removes the container:
docker run --rm -i -v <absolute-path-to-your-project-locally>:/app -w /app node:10 npm install
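If your shell supports command substitution, you can let it fill in that absolute path for you; this is the same command with $(pwd) standing in for the project path:
docker run --rm -i -v "$(pwd)":/app -w /app node:10 npm install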
As mentioned above, having a different local version of Node than what Docker is running could be problematic. It is best to run commands inside of your container for consistency. You can go inside the container with
docker exec -it app bash
The above command will put you inside the container, so you can run your commands from there, e.g. npm run start or npm run test.
If you prefer not to go inside the container, you can run commands like this:
docker exec -t app bash -c "npm run start"