In this article, we'll learn how to develop Node.js apps using Docker from the very beginning. We won't scaffold the project on the local machine (using `npm init`) and add Docker support later; you won't even need Node.js installed on your machine. All you need to install is Docker, and that's basically it. I'll keep it as simple as possible for everyone to understand. So without further ado, let's dive right in.
We'll start by creating a docker-compose.yml file in an empty folder / directory (whatever you prefer to call it). In that file, let's put the following lines:
```yaml
services:
  npm:
    image: node:lts-alpine
    working_dir: /tmp/app
    volumes:
      - ./code:/tmp/app:rw
    entrypoint:
      - npm
```
Let's break down what we've written so far. Every docker-compose file starts with the key `services`. Nested under it, we define all the necessary "service containers" we'll work with. Here, we've just added a so-called "utility" container. Why did we do that? Aha! Glad you asked. Remember that having Node.js installed on our machine is completely optional? Without Node.js installed, we don't have the `npm` binary executable either. That's why we created this service container: to make the `npm` binary available to us. Soon we'll see how we can use this container to initialize a new Node.js project and later install dependencies. Let's move on.
So we have our first utility container, named `npm` (although you can name it however you'd like). Nested under it, we have a few keys: `image`, `working_dir`, `volumes`, and `entrypoint`. `image` defines which Docker image we're going to use. Since the `npm` binary ships with the Node.js installation, we've used a Node.js image (specifically the LTS version on Alpine Linux). Next, we set an arbitrary working directory, `/tmp/app`. Since this is a throwaway container, we use a temporary folder. That's entirely my preference; feel free to use any other folder of your choice, as long as you don't use a path reserved by the OS itself (e.g. `/root` for the image we're using).
Next up, we've got `volumes`. It takes an array of strings, and each string follows a specific pattern: 3 segments delimited by `:`. The 1st part is an absolute or relative path on the host machine (your PC), the 2nd part is an absolute path inside the container, and the 3rd part is the volume's access mode (usually `rw` for read-write or `ro` for read-only). So here we're mounting a path from our local machine (`./code`, relative to the project folder) to a path inside the container (`/tmp/app`, an absolute path) in read-write mode. Notice that the path inside the container (the middle segment) matches the one we defined as our working directory. They must stay in sync: if you chose a different `working_dir` than mine, change this path to match. Finally, we have `entrypoint`. It also takes an array of strings, and we set it to `npm`. It has to be `npm`, since this stands in for the `npm` binary we'd otherwise execute on our local machine.
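Since the entrypoint is `npm`, whatever we pass to `docker compose run` after the service name gets appended to it, exactly as if we were typing `npm …` locally. A quick sketch (assuming Docker is installed and you're in the project folder):

```shell
# Arguments after the service name are appended to the entrypoint,
# so this runs `npm --version` inside a throwaway container:
docker compose run --rm npm --version
```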
With everything set up correctly, we're now ready to run our first Docker command! Open up a new terminal window and navigate into the project folder, then run:
```shell
docker compose run --rm npm init
```
This command may look a bit familiar to you, especially the last 2 words. What we're doing here is telling Docker to "run" the service named "npm" with the "init" argument. So if you named the service something other than "npm", you need to adjust the above command accordingly. The `--rm` flag tells Docker to remove the container once it's done executing the command.
If you've done it correctly, you should be presented with the same interactive questionnaire as when you run `npm init` locally on your machine. Follow the on-screen instructions to create a package.json file. Notice that the file has been created inside the `code` folder — the very folder we mounted as a volume. Coincidence? Let me know in the comments below!
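For reference, the generated package.json might look something like this — the exact values depend on your answers to the questionnaire, and the `name` here is just a placeholder:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```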
Now, we'll install Express. Run the following command:
```shell
docker compose run --rm npm i express
```
Let's create an app.js file inside the `code` folder and add the following lines:
```javascript
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.json({ status: 200, message: 'Hello, world!' });
});

app.listen(12345);
```
We have our little Node-Express app. Now, how are we gonna run it? We have to revisit the docker-compose.yml file again. Let's add another service, only this time it's gonna be an "application" container.
```yaml
services:
  app:
    image: node:lts-alpine
    working_dir: /usr/src/app
    volumes:
      - ./code:/usr/src/app:rw
    ports:
      - 8080:12345
    command:
      - npx
      - nodemon
      - -L
      - app.js
  npm:
    image: node:lts-alpine
    working_dir: /tmp/app
    volumes:
      - ./code:/tmp/app:rw
    entrypoint:
      - npm
```
The sequence doesn't matter here. However, I prefer to have all the utility container(s) after all the application container(s).
As you can see, we've added another service named "app". Again, this name can be anything, and you can choose a different one than mine. Notice that we've chosen a different working directory (which is just my personal preference) and swapped `entrypoint` for `command`. If I were to sum up the difference between these two in one line each, I'd say:
| `command` | `entrypoint` |
| --- | --- |
| Allows us to set a default command which will be executed when the container starts up. | Used to configure a container to be run as an executable. |
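To see the practical difference with our own two services (hypothetical invocations; they assume the compose file above):

```shell
# `command` is only a default: anything you pass replaces it entirely,
# so this runs `node app.js` instead of the nodemon command:
docker compose run --rm app node app.js

# `entrypoint` always runs: arguments are appended after `npm`,
# so this runs `npm install`:
docker compose run --rm npm install
```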
Most importantly, we've defined a brand-new key: `ports`. It takes an array of strings, each with 2 numbers delimited by `:`. It maps a port from inside the container (the latter segment) onto the host machine (the former segment). I have deliberately kept different port numbers, just to show you that you can mix and match the numbers however you'd like. You may have noticed that the container port is the same one our Express app listens on, which is mandatory: if your Express app listens on a different port, you have to adjust the container port to match. You can also see that in the `app` container we're running app.js through the "nodemon" package (executed via `npx`), which restarts the server whenever a file changes.
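If you'd rather keep that long command in one place, you could optionally define a script for it in your package.json (a sketch; the compose file's `command` would then shrink to `npm run dev`):

```json
{
  "scripts": {
    "dev": "npx nodemon -L app.js"
  }
}
```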
With everything set up, let's try to start our Express app. Run:
```shell
docker compose up -d app
```
The `-d` flag runs the container in "detached" mode, freeing the terminal so we can run additional commands. You can omit that flag, but then the terminal will stay occupied as long as your Express app is running.
How do we now see our application sending responses? Do we visit http://localhost:12345 or http://localhost:8080? Remember, we mapped port `12345` from the container to port `8080` on our host machine. So we have to visit http://localhost:8080 in order to see our application. Only from inside the container would we visit http://localhost:12345, because in that case we'd be on the container's own network. Hope that makes sense.
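A quick way to verify from another terminal (assuming the container is up):

```shell
# Hits the mapped host port; the Express handler responds with JSON:
curl http://localhost:8080/
# {"status":200,"message":"Hello, world!"}
```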
Thanks to the `volumes` mount and the "nodemon" package, the application server restarts every time we change files on our host machine (since the changes are immediately reflected inside the container as well), and we've got ourselves a perfectly "dockerized" development environment. The COOL thing about this setup is that any bug you face while developing the app will be consistent across all platforms, be it Windows, Mac or Linux. You won't find yourself in a situation where a bug happens only on Linux and not on Mac and Windows.
When you're done working for the day, just run `docker compose down` to shut down the application container.
One IMPORTANT note: any additional files you create that are part of the application must be inside the `code` folder.
In the next part, we'll see how we can add a database to our application so that we can persist any data our application generates. See ya there!