Jonas Scholz

Say Goodbye to Docker Volumes 👋

Ever tried to use Docker volumes for hot-reloading in your web app? If you had the same horrible experience as me, you'll enjoy the newest feature Docker just released: docker-compose watch! Let me show you how to upgrade your existing project to a wonderful Docker dev setup that your team will actually enjoy using 🤩

TL;DR: Check out this docker-compose file and the official documentation

Let's get started!

Introduction

Docker just released Docker Compose Watch with Docker Compose version 2.22. With this new feature, you can use docker-compose watch instead of docker-compose up and automatically synchronize your local source code with the code in your Docker container, without needing volumes!
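
If you want to try it right away, a quick sanity check could look like this (a minimal sketch, assuming the Compose v2 CLI plugin is installed):

docker compose version   # should report v2.22 or newer
docker compose watch     # builds, starts, and watches every service that has a watch section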

Let us take a look at how this works in a real-world project, using a project that I previously wrote about.

In this project, I have a monorepo with a frontend, backend, and some additional libraries for the UI and database.

├── apps
│   ├── api
│   └── web
└── packages
    ├── database
    ├── eslint-config-custom
    ├── tsconfig
    └── ui

Both apps (api and web) are already dockerized, and the Dockerfiles live in the root of the project (1, 2).

The docker-compose.yml file would look like this:

services:
  web:
    build:
      dockerfile: web.Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - api
  api:
    build:
      dockerfile: api.Dockerfile
    ports:
      - "3001:3000"from within the Docker network

That's already pretty good, but as you probably know, it's a PITA to work with during development. You have to rebuild your Docker images whenever you change your code, even though your apps probably support hot-reloading out of the box (or with something like Nodemon if not).

To improve this, Docker Compose Watch introduces a new attribute called watch. The watch attribute contains a list of rules, each with a path that is watched and an action that gets executed once a file in that path changes.

Sync

If you want a folder synchronized between your host and your container, you add:

services:
  web: # shortened for clarity
    build:
      dockerfile: web.Dockerfile
    develop:
      watch:
        - action: sync
          path: ./apps/web
          target: /app/apps/web

Whenever a file on your host under ./apps/web changes, it gets synchronized (copied) into your container at /app/apps/web. The additional /app prefix in the target path is required because that is the WORKDIR defined in the Dockerfile. This is the action you will probably use most if you have hot-reloadable apps.
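
For context, here is a minimal sketch of what a matching web.Dockerfile could look like. The WORKDIR /app line is the relevant part; the rest is an assumed, simplified layout rather than the project's actual file:

FROM node:20-alpine
WORKDIR /app                            # sync targets like /app/apps/web resolve against this
COPY package.json package-lock.json ./
RUN npm install                         # cached dependency layer (see the rebuild action below)
COPY . .
CMD ["npm", "run", "dev"]               # the hot-reloading dev server picks up synced files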

Rebuild

If you have apps that need to be compiled or dependencies that you need to re-install, there is also an action called rebuild. Instead of simply copying the files between the host and the container, it will rebuild and restart the container. This is super helpful for your npm dependencies! Let's add that:

services:
  web: # shortened for clarity
    build:
      dockerfile: web.Dockerfile
    develop:
      watch:
        - action: sync
          path: ./apps/web
          target: /app/apps/web
        - action: rebuild
          path: ./package.json
          target: /app/package.json

Whenever our package.json changes, the entire image is now rebuilt to install the new dependencies.
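
Since glob patterns are not supported (more on that below), in a monorepo you may also want one rebuild rule per workspace manifest; a sketch, assuming the layout from above:

        - action: rebuild
          path: ./apps/web/package.json
          target: /app/apps/web/package.json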

Sync+Restart

Besides just synchronizing and rebuilding, there is also something in between called sync+restart. This action first synchronizes the directories and then immediately restarts your container without rebuilding it. Many frameworks have config files (such as next.config.js) that can't be hot-reloaded (sync alone isn't enough) but also don't require a slow rebuild.

This changes your Compose file to this:

services:
  web: # shortened for clarity
    build:
      dockerfile: web.Dockerfile
    develop:
      watch:
        - action: sync
          path: ./apps/web
          target: /app/apps/web
        - action: rebuild
          path: ./package.json
          target: /app/package.json
        - action: sync+restart
          path: ./apps/web/next.config.js
          target: /app/apps/web/next.config.js

Caveats

As always, there is no free lunch and a few caveats 😬

The biggest problem with the new watch attribute is that the paths are still very basic. The documentation states that glob patterns are not supported yet, which can result in a huge number of rules if you want to be specific.

Here are some examples of what works and what does not:

apps/web
This will match all the files in ./apps/web (e.g. ./apps/web/README.md, but also ./apps/web/src/index.tsx)

build/**/!(*.spec|*.bundle|*.min).js
Globs are sadly not supported (yet?)

~/Downloads
This won't work: all paths are relative to the project root!
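
One mitigation: watch rules support an ignore option in the Compose develop specification, so a broad sync rule can at least exclude subtrees. A sketch:

services:
  web:
    develop:
      watch:
        - action: sync
          path: ./apps/web
          target: /app/apps/web
          ignore:
            - node_modules/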

Next Steps

If you are still not happy with your Docker setup, there are still many ways to improve it!

Collaboration is a big part of software development, and working in silos can seriously damage your team. Slow Docker builds and complicated setups don't help! To counteract this and promote a culture of collaboration, you can use Docker extensions such as Livecycle to instantly share your local docker-compose apps with your teammates. Since you are already using Docker and docker-compose, all you need to do is install the Docker Desktop extension and click the share toggle. Your apps are then tunneled to the internet, and you can share your unique URL with your team to get feedback! I've written more about that in this post if you want to check out more use cases of Livecycle :)

As always, make sure that your Dockerfile is following best practices, especially around multi-stage builds and caching. While this might make writing the initial Dockerfile harder, it will make your Docker apps a lot more pleasant to use during development.

Creating a basic .dockerignore file and separating dependency installation from code building goes a long way!
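
As an illustration, a small .dockerignore along these lines keeps node_modules and other noise out of the build context (an assumed example, not the project's actual file):

.git
node_modules
**/node_modules
*.log

Combined with the dependency layering sketched earlier (COPY the manifests, RUN the install, then COPY the rest), npm install only reruns when the manifests actually change.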

Conclusion

As always, I hope you learned something new today! Let me know if you need any help setting up your Docker project, or if you have any other feedback.

Cheers, Jonas :D

Top comments (62)

Patrick Roza

Hard to believe there are people out there developing their apps in containers.
I would personally say goodbye to Docker (for developing) instead.

Christophe Avonture

For me, today there is no other way than using Docker containers. I'm working in a team, and since I introduced Docker, the "it doesn't work on my machine" problem doesn't exist anymore.

And since all settings are in Docker, a new colleague just needs to run a few commands and can start to code.

I'll never go back to working the way I did before. Docker all the time for me.

Patrick Roza

This can be achieved without Docker via env managers, Nix, or even just a smart setup. When settings can live in Docker files, they can also live elsewhere.

Christophe Avonture

Sorry, I disagree. Using Docker I can create my own Docker images where I can foresee everything: for PHP, each PHP module to load, how they should be configured, and so on. Then I just share my docker-compose.yml file (and associated Dockerfiles) so everyone has the perfect setup.

Everything is correctly configured, everyone uses the same versions of tools, and when I update PHP from, for instance, 8.2 to 8.3, it'll be done for everyone at no cost. I'll make the change and everyone will get the newer version with no effort.

Docker has totally changed how we're working.

Patrick Roza

I can see this is more complicated with stacks like that. Still, you don't need Docker for that. Nix will do that just fine for you, without containers. Now I'm personally not a huge fan of Nix, but there are others too; it's not limited to containers.
Docker is great for distribution, deployment, testing, and having requirements to run many things together, but for a little bit of environment, settings, and package management it's like using an elephant to catch a mouse :)
With all the complexity and issues like watchers and networking involved. Especially on non-native systems like Mac and Windows.

That said, if it works for you/your team, then it is hard to argue. That is the most important factor.

Christophe Avonture

Here too, I don't agree, since under Windows we can use WSL2 (Windows Subsystem for Linux).

I should admit I don't even know what Nix is (and I'm not really curious about it, since Docker fulfills my needs right now).

We (my team) are using Docker on Windows machines, in a WSL2 environment. We're coding a major PHP application with a lot of dependencies; we have a backend, an API, a front end using VueJS 3, PostgreSQL and MySQL databases, ... and everything is configured using Dockerfiles and a few docker-compose.yml files.

Our projects are stored in our self-hosted GitLab environment, and all actions are available through a makefile, so when someone needs to do something (like running pgAdmin, for instance), they don't need to remember which command to run ... it's just make something, like, here, make pgadmin.

All the complexity has been swallowed up by one member of the team, who is in charge of maintaining the framework, and the others just enjoy running very easy make something commands.

Yes, learning Docker is not easy, but I would never go back for local development. Everyone has the same installation, no more "it doesn't work on my machine", etc.

Milosz Linkiewicz

Don't get me wrong, but both of you are arguing without knowing what a container is or how it is achieved. First rule: there is no such thing as a container. Linux users agreed some time ago that using chroot, Linux namespaces, process and hardware isolation, as well as network package routing in a sophisticated way, should become standardized; that is how Docker emerged (this is a simplification).

So a Dockerfile generates "images" with a chrooted environment that is later managed by dockerd, with added networking, in an isolated environment with its own process tree, etc.; we call such a runtime a 'container'. But it is still a normal Linux process: almost no cost, no overhead, as simple as it can be. This no-cost run vs. VM virtualization or other non-standardized tooling wins on all fronts: resources, speed, interoperability, etc.

Please have this in mind next time :). P.s. Kubernetes is nothing more than another almost-free-of-cost abstraction for networking and orchestration. This is the main reason behind such a boom in cloud computing: "there is no such thing as a container".

insidewhy

There is such a thing as a container though: there can only be one instance of a tagged image, and a container is an instance of that image that can be executed.

Milosz Linkiewicz

Only from a metadata point of view would your statement be partially true. Let me describe a little bit more. (p.s. it's 6 a.m., I need to take some rest)
So again, this would be a misleading statement for a newbie. The words we use in the Cloud Native environment are abstract and most of the time describe a set of rules/tools/functionalities etc. that produce/utilize os-kernel-level application/process management/isolation that from the inside seems like a separate operating system or VM, which it is not; this is still the 'host' Linux OS that most of the time will manage most of the OS API calls etc.

"Tagging" is nothing more than adding metadata for ease of use, nothing more. Instantiation/execution, when understood as isolated execution and dedicated process tree creation (and many other things), does not have any limitations regarding simultaneous runs/replications. There is of course "lots of magic" happening under the hood thanks to, for example, dockerd. I encourage everyone interested to watch at least some of the YouTube material by The Linux Foundation® and the Cloud Native Computing Foundation.

Patrick Roza

Only on Linux are you correct about no overhead; on Windows and Mac this is a different story, although huge strides have been made to optimise or abstract the virtualization.
On both systems there are still limitations, like no host networking support: docs.docker.com/network/drivers/host/

Another point about the resource hog, which counts for Linux as well: at least in our apps we usually have the luxury of replacing out-of-process databases, queues, search engines, or other services with in-process ones.
This generally leads to a faster, less resource-intensive, more transparent, and certainly less complex (no process or network boundaries, etc.) situation. Suddenly the value of Docker for local development is diminished even further.

Testing with the real systems then happens at the integration/e2e test level, aka just before you merge a PR or at deployment, and you just don't promote the build to receive traffic when tests fail, etc. To me those things often matter the least while I'm building domain logic or user experience.

Paul Keen

Cool, makes so much sense.

It's great that nowadays we do not have such problems, because of mature package managers.

Ryan B

I've read a few of your comments and I completely disagree. Docker is good for a few reasons:

  • Keeping development environments as close to testing & production environments as possible.
  • Onboarding new developers is a lot easier with init scripts and version-controlled docker-compose files; it saves me from writing shell scripts or long documentation flows.
  • Supporting multiple developer OSes (Windows, Linux & Mac) is vastly easier. I program on Windows with WSL, whereas some of my mates prefer whatever flavour of Linux they're using.
  • It puts me in control of what Node/Go/Rust environment and version they're using. I don't have to worry as much about environment versions.
  • Larger applications that have a lot of multi-language requirements are easy to set up and manage.
  • Multiple projects that share similar setups or infrastructure can have their boilerplate version-controlled and utilize pre-created containers. Switching projects is a lot easier.
  • Back in the day, we used to have devs install something like XAMPP, edit their hosts files, and so forth. I feel like you're advocating for a return to that kind of flow, and I definitely don't want to return to those days.

All in all, I find Docker to be better for developer experience.

Patrick Roza

Yes, those are all great things Docker provides, but as I've written a few times already, Docker and containers are not the only, nor the best, solution to it imo.
But if it provides you the value and you don't run into issues, then it's good, isn't it :)

As for running as closely as possible to production, that's nice, but CI can do that for you too. It is a form of integration/e2e testing to me. It's an important part, but during development I mostly like to be able to rely on the idea that externalities will do their thing correctly, or at least adhere to their published specification (errors included). They generally have little to do with my domain logic or UI.

Also, I'm not saying never to use Docker for development. But it shouldn't be the default.

FJones

I always find the "as close to the other environments as possible" notion very misleading. What are we really trying to solve there?

(Virtual) hardware mismatch? Those are mostly bugs that would just as easily hurt us if our software had to move to different infrastructure.

Dependency mismatch? This is up to the developer to keep up to date anyway. If your IDE is autocompleting for an older version, Docker ain't solving that for you either.

External services? You'll almost never be able to emulate the state of your Redis or database. Most problems will be load- or volume-related (recent case in my team: queries that ran extremely fast on our test db but got bogged down exponentially on production, with the inverse true for the queries we went with in the end. Why? Subqueries that ran much, much faster on a smaller dataset, thus leading to a faster lookup on the main query. Complete opposite on production.) You can't see that with Docker either.

Relying on system features not available elsewhere (e.g. Windows routines when targeting a *nix environment)? Unless you're doing something fairly esoteric these days, or building software that has very specific target environment constraints, Docker is but one solution to this problem. Spinning up a VM, or on Windows simply compiling inside WSL, would be just as much a solution, and generally with less overhead than having to run a container.

The "similar environment" should be a late quality gate to ensure it's going to run on the target infrastructure, but it makes little sense in development.

Patrick Roza

Totally. Not an unimportant quality gate for sure, but not one I need to complicate my local dev setup with, 99% of the time.
It's cool that I can, though; that I have these tools accessible and can use them, if and when I need them.

Ara Ara no Mi

Using Docker for your development environment adds unnecessary complexity to your development process, stunting progress. I mainly code on Windows and don't want to install WSL just to use Docker Desktop.

What's wrong with using XAMPP? You could just use Laragon instead; it sets up the hosts file automatically for each project folder in its www folder.

Patrick Roza

I think WSL is great even without Docker.
I'm more into cloud-native app development myself, so LAMP and similar stacks are not in my tool belt. But for more complex projects I can certainly understand the need for config/lib/package management; it's just that Docker is overqualified :)

Ray Thurne Void

In some cases you are forced to because local development is worse, and Docker is still better than a VM. Yet I agree that up to today, after 10 years, it is still not an enjoyable way to work. However, this is a very good improvement. I wonder what the performance is like on Windows when storing the files in the Windows file system; poor fs performance with mounts is the reason why I'm currently using a local environment instead of the Docker env in my workflow.

nexovec

My apps at the very least rely on a database, a monitoring solution, an event bus, and a k-v store. It's wildly impractical to run all of that directly on my machine, as I often need a way to tear down the whole solution. I only develop simple scripts outside of Docker, because the control you get by running in a container is immense. It also allows you to quickly run your program with different configurations, possibly at the same time.

As for the company, it has allowed us to make a clear "must run on docker compose up" policy. There are no discussions about whether a thing will run anymore.

Clayton Kehoe

I'm not a huge fan of it either. VS Code dev containers help, but I still like using a local dev environment.

Patrick Roza

If you need to use dev containers for remote development for some reason, it's great. But this is imo not the same as how people usually develop locally with Docker.
If your whole editor server and files are in Docker for remote work or so, it's a different story: instead of in a VM you are in a container; fair game.

The local non-Docker experience on a good machine is still unbeaten, I agree. But it's a trade-off. You can develop with just your phone, tablet, or ultra-light laptop, and dev remotely.

Jonas Scholz

No need to believe, a lot of people do that :D. Why do you think it's hard to believe?

Patrick Roza

Because it's nonsense. It's a great way to deploy and run apps. For development you just increase complexity and resource usage, and come up with workarounds for workarounds, like watch instead of volumes, etc. Rebuilding containers for a package change? These are solutions to problems that shouldn't exist in the first place.

For development, if it's about reproducible environments, there are better alternatives like various env managers or Nix, without the complexities of Docker.

For external dependencies like SQL servers, that's perhaps another topic, though I usually swap them out for in-process versions; they're faster, less complex, more transparent, and again, don't need Docker. Another option is online services, but of course by using Docker or by internalising external dependencies during development you also remove the dependence on stable internet.
But I suppose that's just a matter of taste :)

For running some e2e test suite, e.g. on CI, containers are also a great solution.
Preferably you just re-use the built container you also deploy.

Jonas Scholz

Seems like we work on different projects/teams then. I just had a project with ~5 weird external databases/applications that all had to run during development, with a team that was on macOS, Windows, and Linux. Docker saved my life there.

Patrick Roza

Why did you add 5 weird dependencies? :)

Md Abu Taher

I had a project that was developed 4 years ago. All dependencies were out of date and wouldn't install on the latest Ubuntu or Windows.

Docker just saved me hours of trying to figure out the exact versions and libraries for that project.

Docker is saving teams. Nix and other stuff are great, except not everyone has the luxury of using Nix, while Docker is almost everywhere: from VS Code to GitHub Codespaces to dev containers and so on.

It also makes onboarding devs to DevOps easier.

Docker has its pains, but saving the team comes first. 😊

Patrick Roza

That’s great. Not saying there aren’t valid cases.

Khiman Louer

I think you are not seeing the whole picture here, or do not have experience with many products. Using Docker for development allows you to version a single dev environment setup file; no shenanigans to please X who prefers to work with Arch or Y who actually likes Windows.
You get a reproducible environment much, much more easily than on bare metal. This gets even more obvious as your product gains more external dependencies, such as multiple services and databases.
Yes, it's probably more performant to run everything on bare metal, but then every developer has to spend time maintaining their setup, and when something breaks, they are more or less on their own, wasting time trying to fix it.

Patrick Roza

I certainly see the picture and have years of experience with Docker in complex environments.

Docker is a great tool, but like all great tools it has a tendency to be overused, and for most of the benefits Docker brings to local development, there are other tools that bring similar benefits without the complexities and resource usage of Docker, especially outside of Linux.

You could still run your external services in Docker if you have to, or run them on some shared infra. But for your own apps, use an environment manager; generally you don't need Docker.

Victor Vincent

Let's store node_modules twice on our machines!

Jokes aside, it's nice to have, but I'll probably still stick to volumes, especially on large projects. I can't see how any watcher can be efficient enough on large codebases. There's a reason all tools exclude 3rd-party lib folders like node_modules from their watchers.

Jonas Scholz

node_modules are excluded from the watch if they are in .gitignore :)

Victor Vincent

So that's even worse, because I need to install in both places anyway to make the IDE work.

Jonas Scholz

Well sure, but with a fast internet connection and enough disk space that isn't really a concern for me, luckily :)

Victor Vincent

Well, enough is quite relative. I have a 4TB+ SSD RAID and soon I need to extend it again. 60% of the space is node_modules :D

Jonas Scholz

Lmao that is impressive

Paul Robello

Docker for my team is a lifesaver. When you have developers running on Windows, Linux, and Mac, on x86/arm architectures, the emulation provided by Docker, as well as the ability to package the needed binaries and environments, is a must. Many projects have many dependencies that you don't want to pollute your development machine with, or cause cross-contamination between different projects. I personally have had great success using host volumes to mount my code into a container for real-time changes both inside and outside of the container. I have created a pretty robust Docker dev tool, GDC, linked in the article.

Jonas Scholz

What is your 2-sentence pitch for GDC? I'm having a hard time understanding exactly what it does from the GitHub README 👀

Paul Robello

My LinkedIn post in the article sums it up in a bit more than 2 sentences, but let me see if I can condense it a bit more.
At its core it is an IDE-agnostic tool which allows you to run your IDE locally to edit/debug code that runs inside a container. The GDC works with Windows/Mac/Linux, x86/arm, and many popular languages such as JS/TS, Python, Java, .NET, PHP, etc.

Prayson Wilfred Daniel

I have been waiting for this for a long time. When teaching machine learning, I have opted to use containers to avoid time wasted installing tools on students' PCs.

I disliked mounting, as it was not sufficient and did not offer restart and reinstallation. These will solve 70% of my issues when developing and teaching Python + ML.

Thank you for sharing 🙏🏾

Jonas Scholz

Ohh that's so cool! Hope this works out for you :)

Corentin Gosselin

Amazing new feature, thank you for sharing! Your monorepo example is exactly my setup right now.

Jonas Scholz

Nice! I think it's one of the most productive setups I've ever had. Anything that you changed to make it even better? 👀

Corentin Gosselin

I would not place the Dockerfiles in the root of the project. I usually place them inside their associated app path, like apps/frontend/my-app/docker/dockerfiles or /apps/backend/my-api/docker/dockerfiles. This way, if you have multiple apps, you keep it structured :)

Jonas Scholz

Yeah, I used to do that as well but stopped for some very specific reason that I don't even remember anymore lol. At this point I'm just used to it and with less than 5 apps it's still manageable :)

Jonas Scholz

I think some hosting platform I used didn't really support that when I started with monorepos? Really no reason to still do that I guess x)

Nadir Hamid

Great read. This is definitely better than using volumes. They were a pain to manage and annoying to integrate with app workflows. Indeed, each framework has a unique set of development challenges and there is no one-size-fits-all, but this new feature helps. It would be good to use some of these rebuild actions in case a basic watch procedure doesn't work.

I hope other dev teams find this soon, as it can save them large amounts of time.

Jeremy Moore

Thank you for sharing! Looks like most of my questions are answered in the docs.

Jonas Scholz

Thanks :) Anything that you think I missed that I should've included?

Pradumna Saraf

Compose watch is an amazing addition. There was always an issue with hot reloading using volumes.

flyingCodeMonkeys

so much hot sexy goodness! save me tons of headaches. holy cow this is good. 😀

Ivan Zakutnii

It is so neat, thank you for sharing this news!

Jonas Scholz

Glad you liked it!

Paul Keen

Why do you need to add your code to the container? When would you need that instead of a mount? I have not seen dev containers that add code to the image.

Paul Keen

You can find a robust version to prevent hacks with watch here: jtway.co/running-tests-in-containe...

Eli Vargas

I'm guessing volumes are a problem for non-Linux users. I have no issues with Docker volumes. But on non-Linux systems Docker actually runs in a VM, so I can see how that could cause problems with hot reloads. 🤔 Good luck, guys. 😅

Julien Dephix

Same here.
After reading the title I was wondering what was wrong with volumes, but I'm running Ubuntu, so that could be why I have no problems: yarn watch and code! ^_^

namdevgg

Goood

cabrel92

Thanks for your post.
Nowadays, developers should think about working with Docker; I personally try to dockerize every application I work on. I stopped working with Vagrant a while ago, because of resource consumption and many other things.
So for me Docker is a must for dev.
Now that I can use watch (sync/rebuild/sync+restart), and now that I have the opportunity to share my container with colleagues, I feel more comfortable.

anhvandev

Hopefully there will be a feature to dispatch multiple commands after changing package settings :v it would look like a CD tool in the local environment.
