Nick Janetakis
Best Practices When It Comes to Writing Docker Related Files

This article was originally posted on June 19th 2018 at:

Everything listed below is based on personal experience and opinions. These are things that have worked well for me and the clients I've worked with while freelancing over the years.

It's also worth mentioning that this list always changes based on new experiences and I'll be updating this post as new patterns / styles emerge.


Dockerfile

  • Use Alpine as a base image unless you can't due to technical reasons
  • Pin versions to at least the minor version, example: 2.5-alpine not 2-alpine
  • Add a maintainer LABEL to keep tabs on who initially made the image
  • Only include ARG and ENV instructions if you really need them
  • Use /app to store your app's code and set it as the WORKDIR (if it makes sense)
  • When installing packages, take advantage of Docker's layer caching techniques
  • If your app is a web service, EXPOSE 8000 unless you have a strong reason not to*
  • Include a wget driven HEALTHCHECK (if it makes sense)
  • Stick to the [] syntax when supplying your CMD instructions

* This is explained in more detail below under the docker-compose.yml section.
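Put together, those Dockerfile guidelines look something like this sketch. The Flask/gunicorn app, the image tag and the maintainer email are all illustrative placeholders, not a prescription:

```dockerfile
# Illustrative Python app; the tag, email and commands are placeholders.
FROM python:3.7-alpine

LABEL maintainer="Nick Janetakis <nick@example.com>"

WORKDIR /app

# Copy the dependency list on its own first, so the install layer
# stays cached until requirements.txt actually changes.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

EXPOSE 8000

# wget ships with Alpine's busybox, so no extra packages are needed.
HEALTHCHECK CMD wget -q -O /dev/null http://localhost:8000/ || exit 1

CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]
```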


docker-compose.yml

  • List your services in the order you expect them to start
  • Alphabetize each service's properties
  • Double quote all strings and use {} for empty hashes / dictionaries
  • Pin versions to at least the minor version, example: 10.4-alpine not 10-alpine
  • Use . instead of $PWD when you need the current directory's path
  • Prefer build: "." unless you need to use args or some other sub-property
  • If your service is a web service, publish port 8000 unless it doesn't make sense to
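As a sketch, a docker-compose.yml following those rules might look like this. The service names, image and volume paths are just examples:

```yaml
version: "3.7"

services:
  # Listed first because the web service expects it to be up.
  postgres:
    environment:
      POSTGRES_USER: "example"
    image: "postgres:10.4-alpine"
    volumes:
      - "postgres:/var/lib/postgresql/data"

  # Properties are alphabetized within each service.
  web:
    build: "."
    depends_on:
      - "postgres"
    ports:
      - "8000:8000"
    volumes:
      - ".:/app"

volumes:
  postgres: {}
```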
Alphabetizing your service's properties, really?

Yep. Some services require a bunch of properties, and without a specific ordering you'll likely order things differently across projects or even services.

It's one more thing to think about, but by having them alphabetized you no longer have to waste brain cycles grouping things up manually, and you can scan a list of properties quickly.

Exposing and Publishing on port 8000?

That last one requires an explanation. I work with Flask, Rails, Phoenix and Node apps on a pretty regular basis. Each of these uses a different default port for its app server.

Rails uses 3000, Flask uses 5000, Phoenix uses 4000 and most Express apps use 3000.

When it comes to accessing these services in your browser, it's confusing because you always think, "wait, is it localhost:3000 or localhost:4000?"

By defaulting to 8000 for your web services, there's no more second guessing yourself. If you have a multi-service app, then increment the port by 1, such as 8001, 8002, etc.
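In compose terms, one reading of that convention is to bump the published host port per service while each container keeps listening on 8000. The service names here are hypothetical:

```yaml
services:
  web:
    ports:
      - "8000:8000"
  api:
    ports:
      - "8001:8000"   # host port incremented; the container still listens on 8000
  admin:
    ports:
      - "8002:8000"
```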


.dockerignore

  • Don't forget to create this file :D
  • Don't forget to add the .git folder
  • Don't forget to add any sensitive files such as .env.production
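A minimal .dockerignore covering those points can be as short as this (.env.production is just the example file name from above):

```
.git
.env.production
```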

Example Apps for Popular Web Frameworks

I've put together a few example applications that stick to these best practices. You can find them all on

Fully working Docker Compose based examples that you can reference:

If you don't see your favorite web framework listed above, open up a PR! This repo is meant to be a community effort where we can work together to make high quality example apps that demonstrate Dockerizing popular web frameworks and libraries.

What are some of your best practices? Let me know below!

Top comments (11)

David J Eddy

"...Alphabetize each service's properties..." And here I was thinking I was the only person who did this.

"Exposing and Publishing on port 8000" Would it not make more sense to use 8080, the official HTTP alternate port?

Right on with your points about .dockerignore. Too many times I have seen a .git folder inside a container. It makes me sad because some places do not include .git in the server config's ignore declaration, so accessing a project's .git over HTTP is very possible. Couple this with the (always) bad practice of putting credentials into tracked files and the application is inherently insecure.

Nick Janetakis

You could choose 8080 if you want. I tend to reserve 8080 in the case where you might be running nginx or Apache behind a load balancer. You would typically listen on 8080 on those services and reserve 80/443 for your load balancer.

David J Eddy

Very reasonable. With micro-service style applications becoming more and more popular starting at a flat even 8000 gives us at least 80 before reaching 8080 :).

Nick Janetakis

I went with 8000 because 8000 has the least amount of zeros to still be associated to port 80 and be above port 1024 to avoid permission issues.

Or the less scientific reason (and the real reason I went with it) is because when you pronounce it out loud you can pronounce it like Leonidas screams "This is Spartaaaaaaaaa!".

So now you have an excuse to scream "eight thousaaaaaaaaand!". It's the only thing I think of now whenever I read or write port 8000 and it makes me internally smile every time.

David J Eddy

Hahaha! Love it!

Philip H.

You might like to add advice with respect to PoLP (the principle of least privilege) and the USER instruction here.

Fluttershy

THIS. Always add a user and don't run your app as root!
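For example, a sketch of what that looks like in an Alpine-based Dockerfile. The user name "app" and the CMD are illustrative:

```dockerfile
# Create an unprivileged user and group ("app" is an arbitrary name).
RUN addgroup -g 1000 app \
    && adduser -D -u 1000 -G app app

# Everything after this line runs as "app" instead of root.
USER app

# Binding to 8000 works for a non-root user because
# it's above port 1024, so no special privileges are needed.
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]
```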

Renorram Brandão

Do you have a better explanation about running with a different user? I've been having a bad time trying to run a service with php-fpm + nginx.

Fluttershy

A short tutorial on this:

Add a user:

RUN addgroup -g 1000 www \
    && adduser -D -u 1000 -G www www

In FPM's case, you have to run the master process as root, but you can run the actual pool as a specific user (PHP then has that user's permissions) by adding these lines to the pool config:

user = www
group = www

With nginx you have the same problem: the main process will run as root, but the actual server can run as a different user by adding the following line to nginx.conf:

user www www;

BTW, one cool feature: the first regular user on Linux gets UID and GID 1000 (at least on my Ubuntu machine). That's why I specify the ID and GID 1000 in the addgroup and adduser commands in the Dockerfile. This way you won't have any permission problems when mounting a folder from your machine into the container. Both the container and the host have the same permissions on the volume :)


I guess there is a way to run nginx and FPM directly as a non-root user; my guess is that you have to set specific permissions on the binaries so they are allowed to bind a privileged port on the machine.

Renorram Brandão

Thanks for the answer :D. It worked great for me on my Deepin machine, but what about the case where the user is going to run it on a Windows or macOS machine? Is there a way to make this work cross-OS?

Chathula Sampath

Great write up! Useful information!

Thank you for sharing this.