Pavan Belagatti

Posted on • Edited on • Originally published at harness.io

Dockerfile Best Practices for Developers

Docker best practices in 2022: this article explains more about Docker and how to write an optimal Dockerfile to build and deploy your applications.

Software development has evolved and changed over the years for good reasons. Organizations, big or small, have discovered the value of modern software development practices, and cloud-native tools have created new conventions for digital transformation. This evolution includes the advent of containers as a means for organizations adopting microservice architectures to package software.

Before containers became a primary component of the modern application stack, developers found different ways to abstract and isolate code. These methods began as simple scripts, and then evolved into more robust solutions. Now, containers are a foundational component of most developers’ toolkits. Docker is currently the de-facto standard for packaging and deploying containerized applications at scale.

This article will explain more about Docker, as well as how to write the optimal Dockerfile to build and deploy your applications.

Docker

Image credits: Docker

Since it’s better to deep dive slowly, we’re going to take some time describing Docker. With the popularity of virtualization and container technologies, software developers started looking for ways to speed up their software development processes. This is where Docker began to shine.

Docker is a lightweight containerization technology that lets you build, ship, and run applications in containers. Each application is packaged as an individual container with its own isolated filesystem, processes, and network interfaces, while sharing the host operating system's kernel, so it can operate independently from other containers. As a result, Docker helped developers package their applications and streamline their software development processes.

This article covers what Docker is, why it's useful for software developers, how to create a Dockerfile for your application, and how to use it effectively in different environments. Developers can now create their own custom environments without worrying much about breaking anything in the process. This freedom means that we can write code in whatever way makes sense for our project, package it, and ship it.

Dockerfile

When using containers to deploy your applications, one of the most important things that you must get right is your Dockerfiles. Dockerfiles are how you tell Docker how to build and deploy your containers. If they aren’t well-written or optimized for your needs, that can significantly impact how quickly and efficiently you can get new versions of your application up and running.


In this article, we’ll look at some of the best practices for writing effective Dockerfiles. These tips will help you streamline the deployment process with Docker and make sure that any future maintenance is as simple as possible. Since there are many different ways to use Docker (and other similar tools), we won’t go into specific instructions about which commands to use in which circumstances. Instead, read these pointers as general guidelines that will help optimize your Dockerfile.

A Dockerfile is the primary file that helps you with the deployment process of your code in a Docker container. With a best practices guide in hand, creating and using Dockerfiles will be much easier and more streamlined.

Dockerfile Best Practices

Dockerfiles, as simple as they may seem, can become surprisingly complex from a developer's perspective. They must be documented in enough detail that anyone can run them and get a working container in the end. Comprehensive documentation of development processes is also a best practice, especially if your team (or organization) has more than one developer working on the code simultaneously.

Here are some of the best practices to keep in mind:

- Do not use your Dockerfile as a build script:

A Dockerfile is a set of instructions that can be used to create a custom image. It should not double as a build script, because that makes your builds unnecessarily long. When you must compile or bundle software in your image, use the COPY instruction (or ADD only when you need its archive-extraction or remote-URL features) to bring the necessary source files into the image, and then run the project's own build command. This keeps the Dockerfile short and lets you manage the build logic and its dependencies separately from the Dockerfile, as sketched below.
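For example, here is a minimal sketch of that idea, assuming a Node.js project whose build logic lives in an npm script called build rather than in the Dockerfile (the script name and the dist/app.js path are illustrative):

# The Dockerfile only copies sources and delegates the actual build to the
# project's own tooling (an assumed "build" script in package.json).
FROM node:14-alpine
WORKDIR /app
# Copy the dependency manifests first, then install.
COPY package.json package-lock.json ./
RUN npm install
# Copy the rest of the sources and run the project's build command.
COPY . .
RUN npm run build
# Assumed output path; adjust to wherever your build writes its artifacts.
CMD [ "node", "dist/app.js" ]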

- Use ENV to define environment variables:

Setting environment variables is a best practice for Dockerfiles. Although it might seem like a small detail, defining your environment variables makes your containers more portable: the image itself is built once and stays the same everywhere it runs, so anything that must differ between environments or between runs should come in as configuration rather than be baked into the image. If you have a value that needs to change between the inside and the outside of your container, or from one execution to the next, define a sensible default with ENV and override it at runtime, as in the sketch below.
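A minimal sketch of that pattern (the image name my-app is illustrative):

# In the Dockerfile: declare a default value that travels with the image.
ENV NODE_ENV=development

# At runtime: override the default without rebuilding the image
# (shell command; my-app is an assumed image tag).
docker run -e NODE_ENV=production my-app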

- Commit your Dockerfile to the repository:

One of the best practices for Dockerfiles is committing them to your repository alongside your application code. This lets you version, review, and reference the Dockerfile later without having to remember which commands you used or what order they ran in.

- Be mindful of your base image and its size:

One of the most important things to consider when creating your Dockerfile is the base image you're using. A large base image, or one padded with extraneous packages and scripts, increases your Docker image's size, which makes your container much slower to pull and start up or, even worse, prevents it from starting at all. The best way to avoid this is to be mindful of which packages and scripts you include: prefer a slim or alpine variant when one exists, and keep tools that are only needed at build time out of the final image (a multi-stage build, sketched below, is one way to do this). This saves space in your image, making it faster to ship and start.
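A minimal multi-stage sketch, assuming a Node.js app with an npm build script (stage names, script names, and the dist/ output path are illustrative):

# Build stage: dev dependencies and build tooling live only here.
FROM node:14-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Runtime stage: a fresh, small image with only what is needed to run.
FROM node:14-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --production
# Copy only the built artifacts out of the build stage.
COPY --from=build /app/dist ./dist
CMD [ "node", "dist/app.js" ]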

- Do not expose secrets:

Never share or copy application credentials or any other sensitive information into the Dockerfile. Instead, use a .dockerignore file to keep files that might contain sensitive information out of the build context. The .dockerignore file works much like a .gitignore file: it lets you list the files and directories that the build process should ignore (see the example below).
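A small illustrative .dockerignore (the exact entries depend on your project):

# .dockerignore: keep secrets and local clutter out of the build context
.git
node_modules
# local environment files and credentials (illustrative names)
.env
*.pem
npm-debug.log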

- Be mindful of which ports are exposed:

When designing your Dockerfile, make sure you know which ports your service actually listens on. Note that EXPOSE does not publish a port or configure any networking by itself; it documents which port(s) the containerized service listens on, and a port only becomes reachable from outside when you publish it at runtime (for example with docker run -p). Being deliberate about this matters, because publishing more ports than necessary can expose critical services to the outside world and leave them open to attack. If a service must be reachable from the public internet, document its port with an EXPOSE entry in your Dockerfile and publish only that port when you run the container.

FROM node:14-alpine AS development
ENV NODE_ENV development
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3002
CMD [ "node", "app.js" ]

FROM node:14-alpine AS development: Start from the official node base image, using the 14-alpine variant, and name this build stage development. We're essentially using node:14-alpine as the template for our image.

ENV NODE_ENV development: NODE_ENV is a system environment variable that Node exposes to running scripts. It’s used by convention to determine dev-vs-prod behavior. Here we have set it to development.

WORKDIR /app : Next, we set the working directory inside the container to /app. The directory called app will hold our project files and everything related to them.

COPY package.json . : We copy our package.json file first so that the dependency list is available for the install step (and so Docker can cache this layer separately from the rest of the source).

RUN npm install : This command installs all of the dependencies mentioned in our package.json file.

COPY . . : We copy all of the contents of our current working directory into the Docker image.

EXPOSE 3002 : We document that the application inside the container listens on port 3002.

CMD [ "node", "app.js" ] : We specify that the container should start the application with the command node app.js.
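To build and run this image, you would publish the documented port explicitly, since EXPOSE alone does not make it reachable from the host (a sketch; the image tag my-node-app is illustrative):

# Build the image from the Dockerfile in the current directory.
docker build -t my-node-app .

# Run it, mapping container port 3002 to host port 3002.
docker run -p 3002:3002 my-node-app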

A final best practice to remember is to keep your Dockerfile as simple as possible, as it is meant to be. Don't add unnecessary complexity: it should be simple enough that another developer can understand and execute it without any help.

Conclusion

Docker has a huge community and will continue to thrive in the cloud-native ecosystem, with numerous benefits for organizations on their cloud-native journey. Docker is also a boon for developers, as every organization looks to embrace DevOps best practices. We all continue to use Docker to build and ship our software through containers, so it is necessary to know how a Dockerfile is created. I hope this article gave you a sufficient understanding of how to create a Dockerfile using best practices.

What's next? Roll up your sleeves with this simple Dockerizing your Node.js application tutorial to see how a Dockerfile really works, and take Harness CD for a test run. Download the free trial.

Top comments (4)

hutger

Worth mentioning that EXPOSE doesn't set up any networking properties. It's more like a runtime configuration option explicitly stated in your Dockerfile, documenting which port(s) your service listens on.

Cédric Teyton

Dear Pavan, FYI we've added your blog post on a GitHub project gathering content about best coding practices :)
github.com/promyze/best-coding-pra...

Manny

I am curious, where are these best practices drawn from?

I'd love to better understand how these were aggregated!

Pavan Belagatti

I am a new developer advocate and I keep reading a lot of articles. I do practice tutorials, and there was a month where I just practised how to write Dockerfiles properly. That is where I picked up these best practices, while experimenting and reading.