Welcome to this guide on how to dockerize your Node.js application! Docker is an incredibly powerful tool that allows you to package your application with all its dependencies into a standardized unit called a container. This makes it easier to deploy and run your application consistently across different environments.
In this article, we'll walk through a sample Dockerfile and docker-compose.yml file, explaining each relevant section and why it's used. So, let's dive in!
## Dockerfile

```dockerfile
FROM node:18.16.1-alpine3.18 as base

# Create directory for the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json
COPY package*.json ./

# Expose API port
EXPOSE 3000

# ---------------------- START DEVELOPMENT CONFIGURATION -----------------------
FROM base as development
ENV NODE_ENV development

# Copy all other source code to work directory
COPY --chown=node:node . .

# Run npm and install modules
RUN npm ci

USER node

# Run start development command
CMD ["npm", "run", "start:dev"]
# ----------------------- END DEVELOPMENT CONFIGURATION ------------------------

# ----------------------- START PRODUCTION CONFIGURATION -----------------------
FROM base as production
ENV NODE_ENV production

# Copy all other source code to work directory
COPY --chown=node:node . .

# Run npm and install production modules
RUN npm ci --only=production

USER node

# Run start production command
CMD ["node", "bin/www/index.js"]
# ------------------------ END PRODUCTION CONFIGURATION ------------------------
```
The Dockerfile is a text file that contains instructions on how to build a Docker image. It defines the environment, dependencies, and commands needed to run your application inside a container. Here's the breakdown of the Dockerfile provided:
```dockerfile
FROM node:18.16.1-alpine3.18 as base
```

- We start by specifying our base image. In this case, we're using the `node:18.16.1-alpine3.18` image, which includes Node.js installed on an Alpine Linux distribution. Using Alpine as the base image keeps the image size small.
```dockerfile
WORKDIR /usr/src/app
```

- Next, we set the working directory inside the container to `/usr/src/app`. This is where our application code will be copied.
```dockerfile
COPY package*.json ./
```

- We copy the `package.json` and `package-lock.json` files from our local machine into the working directory inside the container (copying them to `/` would place them outside the `WORKDIR` where `npm ci` runs). Copying these files in a layer of their own allows Docker to take advantage of its caching mechanism for faster builds.
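To get the full benefit of that caching, the dependency install itself should also happen before the rest of the source is copied in, so that editing application code doesn't invalidate the cached install layer. A common ordering looks like this (a sketch of the general pattern, not this article's exact file):

```dockerfile
# Copy only the dependency manifests first...
COPY package*.json ./

# ...install dependencies in their own cacheable layer...
RUN npm ci

# ...then copy the rest of the source. Code edits now invalidate only
# this layer, so `npm ci` is skipped on rebuilds as long as the lock
# file hasn't changed.
COPY --chown=node:node . .
```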
```dockerfile
EXPOSE 3000
```

- We document that the application listens on port 3000. Note that `EXPOSE` is informational only; the port is actually published to the host via the `ports` mapping in `docker-compose.yml` (or `docker run -p`).
```dockerfile
FROM base as development
ENV NODE_ENV development
```

- Here, we define a new build stage named `development` based on the `base` stage, and set the `NODE_ENV` environment variable to `development`.
```dockerfile
COPY --chown=node:node . .
```

- We copy all the source code from our local machine to the working directory inside the container. The `--chown=node:node` flag ensures that the copied files are owned by the non-root `node` user, improving security.
```dockerfile
RUN npm ci
```

- This runs `npm ci`, which performs a clean install of the exact dependency versions recorded in `package-lock.json`. Unlike `npm install`, it never modifies the lock file, which makes builds reproducible.
```dockerfile
USER node
```

- We switch to the non-root `node` user for improved security.
```dockerfile
CMD ["npm", "run", "start:dev"]
```

- Finally, we set the command that will be executed when the container starts. In this case, it runs `npm run start:dev`, a custom script defined in the `package.json` file.
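The `package.json` scripts themselves aren't shown in this article, but to make the `CMD` instructions concrete, they might look something like the following (the `nodemon` watcher and the inspector flag are assumptions on my part, chosen to match the `9229` debug port mapped later in the override file):

```json
{
  "scripts": {
    "start": "node bin/www/index.js",
    "start:dev": "nodemon --inspect=0.0.0.0:9229 bin/www/index.js"
  }
}
```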
The above configuration sets up the development environment in the Docker container. Now, let's take a look at the production configuration.
```dockerfile
FROM base as production
ENV NODE_ENV production
```

- Similar to the development stage, we define a new build stage named `production` based on the `base` stage, with the `NODE_ENV` environment variable set to `production`.
```dockerfile
COPY --chown=node:node . .
```

- We copy all the source code again, including any additional files, to the working directory inside the container.
```dockerfile
RUN npm ci --only=production
```

- Here we pass the `--only=production` flag so that `npm ci` skips the packages listed under `devDependencies`. This keeps the production image lean and optimized.
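As a side note, on newer versions of npm the `--only=production` flag is deprecated; the equivalent modern form is:

```dockerfile
RUN npm ci --omit=dev
```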
```dockerfile
USER node
```

- We switch to the non-root `node` user for improved security.
```dockerfile
CMD ["node", "bin/www/index.js"]
```

- Finally, we set the command to run the production server using the `node` command. It executes the `index.js` file located in the `bin/www/` directory.
That's it for the Dockerfile! Now, let's move on to the docker-compose.yml file.
## docker-compose.yml
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to specify the services, dependencies, and configurations needed to run your application stack. Here's an explanation of the provided docker-compose.yml file:
```yaml
version: "3.8"
services:
  rest-api:
    container_name: rest-api-app
    restart: on-failure
    build:
      context: ./
      target: production
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - .:/usr/src/app
    command: npm run start
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      PORT: 3000
      HOSTNAME: 0.0.0.0
```
- We specify the version of the Compose file format we're using as `3.8`.
- Under the `services` section, we define a service named `rest-api`, which represents our Node.js application.
- We set the `container_name` to `rest-api-app` for easier identification.
- The `restart` option is set to `on-failure`, which means the container will automatically be restarted if it exits with an error.
- The `build` section specifies how to build the service. We set the `context` to the current directory and the `target` to `production`, which corresponds to the production stage in the Dockerfile.
- The `volumes` section defines the volume mappings between the host machine and the container. Mounting `/etc/localtime` read-only makes the container share the host's time zone, and mapping the current directory (`.`) to `/usr/src/app` inside the container allows live code reloading during development.
- The `command` specifies the command to start the service. In this case, it runs `npm run start`, which is defined in the `package.json` file.
- The `ports` section maps port `3000` of the container to the host machine, allowing access to the API.
- The `environment` section sets environment variables required by the application, including `NODE_ENV`, `PORT`, and `HOSTNAME`.
## docker-compose.override.yml

The `docker-compose.override.yml` file is an optional override file that allows us to modify the base `docker-compose.yml` configuration; Docker Compose merges it in automatically when both files sit in the same directory. Here's a breakdown of the overridden sections:
```yaml
version: "3.8"
services:
  postgres:
    image: "postgres:15.3-alpine3.18"
    container_name: rest-api-app-database
    restart: on-failure
    ports:
      - "5432:5432"
    volumes:
      - "./temp/postgres/data:/var/lib/postgresql/data"
    environment:
      POSTGRES_DB: rest-api-app
      POSTGRES_USER: rest-api-app
      POSTGRES_PASSWORD: ujt9ack5gbn_TGD4mje
  rest-api:
    build:
      context: ./
      target: development
    command: npm run start:dev
    volumes:
      - /usr/src/app/node_modules/
    ports:
      - "9229:9229"
    environment:
      NODE_ENV: development
      DATABASE_URL: "PROVIDER://USER:PASSWORD@HOST:PORT/DATABASE?schema=SCHEMA"
      TOKEN_SECRET: ERN7kna-hqa2xdu4bva
      EXPIRES_IN: 3600
    links:
      - postgres
```
- We define a service named `postgres` to represent a PostgreSQL database container. It uses the `postgres:15.3-alpine3.18` image and sets the necessary environment variables for the database. The container is named `rest-api-app-database`.
- The `rest-api` service is overridden with `target: development`, so the build stage named `development` from the `Dockerfile` is used instead of `production`. This lets us run the application with development features such as hot reloading, via `command: npm run start:dev`.
- The anonymous `/usr/src/app/node_modules/` volume prevents the source bind mount from the base file from hiding the `node_modules` directory installed inside the image.
- We map port `9229` of the container, Node.js's default inspector port, to the host machine for debugging purposes.
- The `environment` section defines additional environment variables, such as `DATABASE_URL`, `TOKEN_SECRET`, and `EXPIRES_IN`, which are required for the application to function correctly.
- The `links` section establishes a link between the `rest-api` service and the `postgres` service, enabling communication between them. (On modern Compose this is largely redundant, since services on the same default network already reach each other by service name.)
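The `DATABASE_URL` above is deliberately left as a template. Plugging in the values from the `postgres` service in this same file, with the service name `postgres` as the host, it would look something like the following (the `postgresql` provider prefix and the `public` schema are assumptions, matching the connection-string format an ORM such as Prisma expects):

```yaml
environment:
  DATABASE_URL: "postgresql://rest-api-app:ujt9ack5gbn_TGD4mje@postgres:5432/rest-api-app?schema=public"
```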
## Starting our Containers
Make sure you have Docker installed on your machine. If you don't have it installed, you can read my previous article where I explain how to install Docker and Docker Compose on Ubuntu Windows Development Environment.
Open a terminal or command prompt and navigate to the directory where your `docker-compose.yml` file is located. Once you are there, run the following command:
```shell
docker-compose up -d
```
This command will build the necessary Docker images, then create and start the containers according to the configuration specified in the `docker-compose.yml` and `docker-compose.override.yml` files.

The `-d` flag stands for "detached" and instructs Docker Compose to run the containers in the background, so you can keep using the terminal and work on other tasks while your application containers run.
Since the containers run in the background, use `docker-compose logs -f` to follow their output, and look for any error messages or warnings during the startup process.
After the containers have started successfully, you can access your Node.js application by opening a web browser and visiting `http://localhost:3000`. This assumes that port 3000 is not already in use on your machine.
To check the status of the running containers, you can use the following command:
```shell
docker-compose ps
```
This command will display the status of each service defined in the `docker-compose.yml` file, including the container names, the ports mapped to the host machine, and their current status.
To stop and remove the containers created by Docker Compose, you can use the following command:
```shell
docker-compose down
```
This command will stop and remove the containers, as well as the networks created by Docker Compose. (Volumes are preserved by default; add the `-v` flag if you also want them removed.)
Remember, if you make any changes to your code or configuration files, you can simply rerun `docker-compose up -d` to recreate and restart the containers. If you've changed the `Dockerfile` or your dependencies, add the `--build` flag (`docker-compose up -d --build`) so the images are rebuilt with the updated code.
To manage our containers more easily, I use Lazydocker; for an explanation of the tool and how to install it, you can read my previous article where I explain how to install and manage Lazydocker in an Ubuntu Windows development environment.
That's it! With this `Dockerfile` and `docker-compose.yml` configuration, you can easily containerize your Node.js application, making it portable and consistent across different environments. Happy containerized development!

I hope you enjoyed this guide to Dockerizing your Node.js application. If you have any questions or feedback, feel free to leave a comment below or message me on Twitter or LinkedIn. Happy coding!