Karan Pratap Singh

Top 5 Docker Best Practices

In this article, we will go over some essential Docker best practices that will help us optimize our images for better size, security, and developer experience.

#1 Least Privileged User

By default, if no user is specified in the Dockerfile, the container runs as the root user. This is a significant security risk: 99% of the time your application doesn't need root, and running as root can make it easier for an attacker to escalate privileges on the host.

To avoid this, simply create a dedicated user and a dedicated group in the Dockerfile and add the USER directive so the application runs as that user.

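Here is a minimal sketch of what this can look like for a Node.js app (the base image tag and the app user and group names are just illustrative choices):

```dockerfile
FROM node:17-alpine

# Create a dedicated group and a dedicated non-root user
RUN addgroup -S app && adduser -S app -G app

WORKDIR /app

# Copy the source and hand ownership to the non-root user in one step
COPY --chown=app:app . .

RUN npm install --production

# Run the application as the non-root user instead of root
USER app

CMD ["node", "index.js"]
```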

Note: Some images already come with a generic non-root user which we can use. For example, the Node.js images come with a user called node.

#2 Multi-Stage Builds

Docker images are often much bigger than they need to be, which ends up impacting our deployments, security, and dev experience. Optimizing a build can be complex: it's hard to keep an image clean, and eventually it gets messy and hard to follow. We also end up shipping unnecessary assets such as tooling, dev dependencies, and the runtime or compiler in our releases.

TLDR

The main idea is to separate the build stage from the runtime stage.

  • Derive from a base image with the whole runtime or SDK
  • Copy our source code
  • Install dependencies
  • Produce build artifacts
  • Copy the built artifacts to a much smaller release image

Below is an example for Go; the final production image is just around 10 MB!
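A minimal sketch of such a multi-stage Dockerfile (base image tags and paths here are assumptions, not the exact ones from the original example):

```dockerfile
# --- Build stage: full Go SDK image to compile the binary ---
FROM golang:1.18-alpine AS build

WORKDIR /src
COPY . .

# Produce a statically linked binary so it can run on a minimal base image
RUN CGO_ENABLED=0 go build -o /bin/app .

# --- Release stage: copy only the built artifact into a much smaller image ---
FROM alpine:3.16

COPY --from=build /bin/app /bin/app

ENTRYPOINT ["/bin/app"]
```

Only the final stage ends up in the image we ship; the Go toolchain, the source code, and any intermediate build files stay behind in the build stage.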

Note: To learn more about multi-stage builds, check out my earlier article, Art of building small containers.

#3 Scanning for Security Vulnerabilities

Security is an essential part of any application, especially if you're working in a highly regulated industry such as healthcare or finance. Lucky for us, Docker comes with a docker scan command which can scan our images for security vulnerabilities. It is recommended to run this as part of your CI setup.

$ docker scan <your-image>

Running the scan against node:17-alpine and node:17, for example, shows that node:17, with its full OS distribution, has tons of vulnerabilities.


#4 Use Smaller Official Images

Let's be honest, no one likes pulling huge containers, whether for development or during CI builds. While we sometimes have to use an image with a full OS distribution for a particular task, in my opinion containers should be small and simply act as an isolated wrapper around our applications when it comes to shipping code. Hence, it is recommended to use smaller images with leaner OS distributions, which bundle only the necessary system tools and libraries, minimizing the attack surface and giving us more secure images.

For example, using alpine is a common practice when it comes to optimizing image sizes. Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and BusyBox.
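In practice this is often just a matter of picking a leaner tag for the base image; a hypothetical Node.js Dockerfile might switch like this:

```dockerfile
# Full Debian-based image: convenient, but several hundred MB larger
# FROM node:17

# Alpine-based variant: bundles only the essentials
FROM node:17-alpine

WORKDIR /app
COPY . .
RUN npm install --production

CMD ["node", "index.js"]
```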


#5 Use Caching

As we know, Docker images consist of various layers, and in a Dockerfile each command or instruction creates an image layer. So if we rebuild our Docker image and the Dockerfile (or the files a layer depends on) hasn't changed, Docker will just reuse the cached layers to build the image. This results in significantly faster image rebuilds.

Below is an example of how we can cache node_modules by taking advantage of Docker layer caching. The same idea can be applied in many situations, depending on our Dockerfile.
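A minimal sketch of the pattern (the base image tag is just an example; file names follow the usual npm conventions): copy only the package manifests first, install dependencies, then copy the rest of the source, so the install layer stays cached as long as the dependencies don't change.

```dockerfile
FROM node:17-alpine

WORKDIR /app

# Copy only the dependency manifests first; this layer (and the install layer
# below) is rebuilt only when package.json or package-lock.json change
COPY package.json package-lock.json ./

RUN npm install --production

# Copy the rest of the source; edits here no longer invalidate the install layer
COPY . .

CMD ["node", "index.js"]
```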

Conclusion

In this article we discussed best practices for Docker. I hope it was helpful, and as always, if you have any questions, feel free to reach out to me on LinkedIn or Twitter.

Comments (6)

Christian

Hey, how does the cache work? The RUN command executes npm install on the local system so it can use the installed modules to build the image, right?

schollii • Edited

No, it runs in the temporary container Docker creates while building.

If the RUN command has not changed since the previous docker build, Docker pulls in the layer built the previous time.

Details are in the Docker best practices doc: docs.docker.com/develop/develop-im....

Christian

Thank you for the answer!

Garrick Crouch

Good ideas. I would point out: avoid chown as a separate layer, since it can take a really long time to run recursively.

The COPY --chown=node:node . /app command has a --chown flag built in for that very purpose, and it is performant.

Karan Pratap Singh

Great tip! Thank you 😊

Makar

Plenty of good tips in a short write-up, thanks much!