If, like the rest of the world, you find yourself needing to Dockerize a Node app, you might have found a few quick Dockerfiles on GitHub, slapped one into your project and run your commands. Sure, it probably even worked without much effort. But have you ever stopped to wonder how good they are? Well, after having to do this myself a few times and wondering the same thing, I found a few tips that give quick wins to any Node-Docker project.
1: Only Install Production Dependencies
When you run a plain npm install, you get every dependency, devDependencies included. Changing the final install command to npm ci --omit=dev (or npm ci --production on older npm versions) installs only the production dependencies pinned in your package-lock.json. npm ci is meant to be a repeatable build/install step, and it will throw an error if your lock file does not match your package.json.
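In a Dockerfile, that install step might look something like this (a minimal sketch, assuming your package.json and package-lock.json sit at the root of the build context):
COPY package.json package-lock.json ./
RUN npm ci --omit=dev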
A free tip: in production environments/builds, set the environment variable NODE_ENV to production. A ton of libraries key off it; Express, for example, caches view templates and trims its error output when NODE_ENV=production.
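In a Dockerfile, this is a single ENV directive, placed before the install and start steps so everything downstream sees it:
ENV NODE_ENV=production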
One final step to make your production builds even better: run npm prune --production to remove any extraneous packages.
This command removes "extraneous" packages. If a package name is provided, then only packages matching one of the supplied names are removed. Extraneous packages are those present in the node_modules folder that are not listed as any package's dependency list.
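If you install and build in the same image, this can be a final RUN step in the Dockerfile (a sketch; with the multi-stage approach in tip 4 it becomes largely unnecessary):
RUN npm prune --production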
2: Don't Run Docker Containers As Root
If an image does not specify another user with the USER directive, it defaults to the root user, which is pretty bad from a security perspective. The official Node images provide a "user of least privilege", appropriately called node. It can be set as the image user with the USER node directive; however, this user also needs ownership of all copied files, which it does not get automatically, even with the USER directive. To handle this, you can provide a --chown argument to the COPY directive like so: COPY --chown=node:node package.json ./
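Put together, the relevant lines of a Dockerfile might look like this (a sketch; the /app working directory and file layout are assumptions for illustration):
WORKDIR /app
COPY --chown=node:node package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --chown=node:node . .
USER node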
3: Properly Handle OS Events
This is something that I was entirely unaware of, but when a Docker container is started with something like CMD npm run start, the process runs as process ID (PID) 1, which Linux treats differently to any other PID. PID 1 gets treated as an init system, the process normally responsible for initialising an OS and other processes. Part of this special treatment from the kernel is that a signal like SIGTERM won't invoke the default fallback behaviour of killing the process unless the process has explicitly registered a handler for it.
To quote the Node.js Docker Working Group recommendation:
Node.js was not designed to run as PID 1, which leads to unexpected behaviour when running inside Docker. For example, a Node.js process running as PID 1 will not respond to SIGINT (CTRL-C) and similar signals.
The way to go about handling this properly is to use an init system that forwards all of these signals correctly. One such tool is dumb-init, a tool developed by Yelp that is statically linked and has a small footprint. So the startup instructions become (apk here assumes an Alpine-based Node image):
RUN apk add dumb-init
ENTRYPOINT [ "dumb-init", "node", "/dist/app.js" ]
This ensures that the app starts as before, but with dumb-init running as PID 1 and forwarding all of the SIG* events to the Node process properly.
4: Multi-stage Typescript Building
Like many of you, I've been enamoured with Typescript. I came from a Python background, then a little C++, and starting with Typescript was like the perfect union between the ease of writing of standard Javascript/Python and the type-safety and resilience of C++. I was in love.
How to Dockerize it properly, it turns out, is pretty straightforward if you're already a Docker user: just use multi-stage builds. Since Typescript requires a build/compile/transpile step, separating this out into its own build stage means you can use all of your build dependencies and artefacts and then discard them once you're done, making the end app leaner and cleaner.
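To make that concrete, here is a sketch of a multi-stage Dockerfile that pulls all four tips together. It assumes a build script that compiles your Typescript into dist/, with dist/app.js as the entry point; adjust names and paths to match your project:
# Build stage: install everything (dev dependencies included) and compile
FROM node:18-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage: only runtime dependencies and the compiled output
FROM node:18-alpine
ENV NODE_ENV=production
RUN apk add --no-cache dumb-init
WORKDIR /app
COPY --chown=node:node package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --chown=node:node --from=build /app/dist ./dist
USER node
ENTRYPOINT [ "dumb-init", "node", "dist/app.js" ]
The build stage, along with its node_modules (Typescript itself included), is discarded entirely; only the compiled dist/ folder and the production dependencies make it into the final image.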
Conclusion
And that's it: four (three if you don't use Typescript) really easy tips for improving your Dockerized Node app. The last tip can be useful for almost any build process, since it gives you greater control over what ends up in your final image. I'm sure we all know it's far too easy to just install everything you might need, final image size be damned, but leaner images mean faster deploy times and less storage used.
Header by frank mckenna on Unsplash