Chris Noring for Microsoft Azure

Originally published at softchris.github.io

Learn Docker - from the beginning, part II volumes

Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris

This article is part of a series:

  • Docker — from the beginning part I, this covers why Docker and basic concepts such as containers, images and the Dockerfile, and of course the commands you need to manage them.
  • Docker — from the beginning, Part II, we are here
  • Docker — from the beginning, Part III, this is about how to deal with databases, putting them into containers and making containers talk to other containers using legacy linking but also through the new standard, networks
  • Docker — from the beginning, Part IV, this is how we manage more than one service using Docker Compose (this is part 1 of 2 on Docker Compose)
  • Docker - from the beginning, Part V, this part is the second and concluding part on Docker Compose where we cover Volumes, Environment Variables and working with Databases and Networks

Welcome to the second part of this series about Docker. Hopefully, you have read the first part to gain a basic understanding of Docker's core concepts and its basic commands, or you have acquired that knowledge elsewhere.

In this article, we will attempt to cover the following topics:

  • recap and problem introduction, let's recap on the lessons learned from part I and try to describe how not using a volume can be quite painful
  • persist data, we can use volumes to persist files we create or databases that we change (e.g. SQLite)
  • turning our workdir into a volume, volumes also give us a great way to work with our application without having to set up and tear down the container for every change

Resources

Using Docker and containerization is about breaking apart a monolith into microservices. Throughout this series, we will learn to master Docker and all its commands. Sooner or later you will want to take your containers to a production environment. That environment is usually the Cloud. When you feel you've got enough Docker experience have a look at these links to see how Docker can be used in the Cloud as well:

  • Containers in the Cloud Great overview page that shows what else there is to know about containers in the Cloud
  • Deploying your containers in the Cloud Tutorial that shows how easy it is to leverage your existing Docker skill and get your services running in the Cloud
  • Creating a container registry Your Docker images can be in Docker Hub but also in a Container Registry in the Cloud. Wouldn't it be great to store your images somewhere and actually be able to create a service from that Registry in a matter of minutes?

Recap and the problem of not using a volume

Ok, so we will keep working on the application we created in the first part of this series, that is a Node.js application with the library express installed.

We will do the following in this section:

  • run a container, we will start a container and thereby repeat some basic Docker commands we learned in the first part of this series
  • update our app, update our source code and start and stop a container and realize why this way of working is quite painful

Run a container

As our application grows we might want to add routes to it or change what is rendered on a specific route. Let's look at the source code we have so far:

// app.js
const express = require('express')
const app = express()

// the port is read from the environment variable PORT
const port = process.env.PORT

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

Now let’s see if we remember our basic commands. Let’s type:

docker ps

Ok, that looks empty. So we cleaned up last time with docker stop or docker kill; regardless of which we used, we don't have a container that we can start, so we need to create one. Let's have a look at what images we have:

docker images

Ok, so we have our image there, let’s create and run a container:

docker run -d -p 8000:3000 chrisnoring/node

That should lead to a container up and running at port 8000 and it should run in detached mode, thanks to us specifying the -d flag.

We get a container ID above, good. Let’s see if we can find our application at http://localhost:8000:
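We can check in the browser, or straight from the terminal with curl (the output line is shown for illustration):

curl http://localhost:8000
Hello World!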

Ok, good there it is. Now we are ready for the next step which is to update our source code.

Update our app

Let's start by changing the default route to render out Hello Chris!, that is, replace the existing default route with the following line:

app.get('/', (req, res) => res.send('Hello Chris!'))

Ok, so we save our change and head back to the browser, and we notice it is still saying Hello World. The container is not reflecting our changes. For that to happen we need to bring down the container, remove it, rebuild the image and then run the container again. Because we need to carry out a whole host of commands, we will change how we build and run our container, namely by actively giving it a name. So instead of running the container like so:

docker run -d -p 8000:3000 chrisnoring/node

We now type:

docker run -d -p 8000:3000 --name my-container chrisnoring/node

This means our container will get the name my-container and it also means that when we refer to our container we can now use its name instead of its container ID, which for our scenario is better as the container ID will change for every setup and tear down.

docker stop my-container # this will stop the container, it can still be started again if we want to

docker rm my-container # this will remove the container completely

docker build -t chrisnoring/node . # rebuilds the image

docker run -d -p 8000:3000 --name my-container chrisnoring/node

You can chain these commands to look like this:

docker stop my-container && docker rm my-container && docker build -t chrisnoring/node . && docker run -d -p 8000:3000 --name my-container chrisnoring/node

My first thought on seeing that is WOW, that's a lot of commands. There has got to be a better way, right, especially when I'm in the development phase?

Well yes, there is a better way, using a volume. So let’s look at volumes next.

Using a volume

Volumes, or data volumes, are a way for us to create a place on the host machine where we can write files so they are persisted. Why would we want that? Well, when we are under development we might need to put the application in a certain state so we don't have to start from the beginning. Typically we would want to store things like log files, JSON files and perhaps even databases (SQLite) on a volume.

It’s quite easy to create a volume and we can do so in many different ways, but mainly there are two ways:

  • before you create a container
  • lazily, e.g. while creating the container

Creating and managing a volume

To create a volume you type the following:

docker volume create [name of volume]
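For example, to create a volume named my-volume (the name we will use in the examples below):

docker volume create my-volume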

We can verify that our volume was created by typing:

docker volume ls

This will list all the different volumes we have. After a while you will have tons of volumes created, so it's good to know how to keep their number down. For that you can type:

docker volume prune

This will remove all the volumes you are currently not using. You will be asked to confirm before it proceeds.

If you want to remove a single volume you can do so by typing:

docker volume rm [name of volume]

Another command you most likely will want to know about is the inspect command, which allows us to see more details on our created volume, probably most importantly where it will place the persisted files:

docker volume inspect [name of volume]

One comment on this though: most of the time you might not care where Docker places these files, but sometimes you will want to know for debugging purposes. As we will see later in this section, controlling where files are persisted can work to our advantage when we develop our application.
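For illustration, a typical output for a volume named my-volume looks something like this (the exact values and path vary by platform and Docker version):

docker volume inspect my-volume
[
    {
        "CreatedAt": "2019-05-30T09:32:59Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-volume/_data",
        "Name": "my-volume",
        "Options": {},
        "Scope": "local"
    }
]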

As you can see, the Mountpoint field is telling us where Docker is planning to persist your files.

Mounting a volume in your application

Ok, so we have come to the point that we want to use our volume in an application. We want to be able to change or create files in our container so that when we pull it down and start it up again our changes will still be there.

For this we can use two different options that achieve roughly the same thing with a different syntax, those are:

  • -v, --volume, the syntax looks like the following: -v [name of volume]:[directory in the container], for example -v my-volume:/app
  • --mount, the syntax looks like the following: --mount source=[name of volume],target=[directory in container], for example --mount source=my-volume,target=/app

Used in conjunction with running a container, it would look like this, for example:

docker run -d -p 8000:3000 --name my-container --volume my-volume:/logs chrisnoring/node

Let's try this out. First off, let's run our container with the command above. Then let's run an inspect command on the container, docker inspect my-container, to ensure our volume has been correctly mounted inside of it. Running that command gives us a giant JSON output, but we are looking for the Mounts property:
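An abbreviated, reconstructed version of that JSON looks something like this:

"Mounts": [
    {
        "Type": "volume",
        "Name": "my-volume",
        "Source": "/var/lib/docker/volumes/my-volume/_data",
        "Destination": "/logs",
        "Driver": "local",
        "RW": true
    }
]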

Ok, our volume is there, good. Next step is to locate our volume inside of our container. Let’s get into our container with:

docker exec -it my-container bash

and thereafter navigate to our /logs directory:
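A sketch of such a session, with an illustrative file name and content:

cd /logs
echo 'persist me' > logs.txt
cat logs.txt
exit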

Ok, now if we bring down our container, everything we created in our volume should be persisted, and everything that is not placed in the volume should be gone, right? Yep, that's the idea. Good, we understand the principle of volumes.

Mounting a subdirectory as a volume

So far we have been creating a volume and letting Docker decide on where the files are persisted. What happens if we instead decide where these files are persisted?

Well, if we point to a directory on our hard drive, it will not only look at that directory and place files there, but it will also pick up the pre-existing files that are in there and bring them into our mount point in the container. Let's do the following to demonstrate what I mean:

  • create a directory, let’s create a directory /logs
  • create a file, let’s create a file logs.txt and write some text in it
  • run our container, let’s create a mount point to our local directory + /logs

The first two steps lead to us having a file structure like so:

app.js
Dockerfile
/logs
 logs.txt // contains 'logging host...'
package.json
package-lock.json

Now for the run command to get our container up and running:
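Reconstructed from the description that follows, the command looks like this:

docker run -d -p 8000:3000 --name my-container --volume $(pwd)/logs:/logs chrisnoring/node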

Above we observe that our --volume argument looks a bit different. The first part is $(pwd)/logs, which means our current working directory plus the subdirectory logs. The second part is /logs, which means we are saying: mount our host computer's logs directory to a directory with the same name in the container.

Let’s dive into the container and establish that the container has indeed pulled in the files from our host computers logs directory:
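A reconstructed version of that session (the container prompt is just an illustration):

$ docker exec -it my-container bash
root@mycontainer:/app# cd /logs
root@mycontainer:/logs# cat logs.txt
logging host...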

As we can see from the above set of commands, we go into the container with docker exec -it my-container bash, then navigate to the logs directory, and finally read out the content of logs.txt with the command cat logs.txt. The result is logging host..., i.e. the exact file and content that we have on the host computer.

But this is a volume, which means there is a connection between the directory on the host computer and the container. Let's edit the file on the host computer next and see what happens to the container:
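For example, appending an illustrative line:

# on the host
echo 'logging host... and a new line' >> logs/logs.txt

# inside the container
cat /logs/logs.txt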

Wow, it changed in the container as well, without us having to tear it down or restart it.

Treating our application as a volume

To make our whole application be treated as a volume we need to tear down the container like so:

docker kill my-container && docker rm my-container

Why do we need to do all that? Well, we are about to change the Dockerfile as well as the source code, and our container won't pick up these changes unless we use a volume, as I am about to show you below.

Thereafter we need to rerun our container this time with a different volume argument namely --volume $(PWD):/app.

NOTE: if your PWD consists of a directory with a space in it, you might need to specify the argument as "$(PWD)":/app instead, i.e. we need to surround $(PWD) with double quotes. Thank you to Vitaliy for pointing that out :)

The full command looks like this:
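docker run -d -p 8000:3000 --name my-container --volume "$(PWD)":/app chrisnoring/node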

This will effectively make our entire app directory a volume and every time we change something in there our container should reflect the changes.

So let’s try adding a route in our Node.js Express application like so:

app.get("/docker", (req, res) => {
  res.send("hello from docker");
});

Ok, so from what we know from dealing with the express library, we should be able to reach http://localhost:8000/docker in our browser, right?

Sad face :(. It didn't work, what did we do wrong? Well, here is the thing: if you change the source in a Node.js Express application, you need to restart it. This means that we need to take a step back and think about how we can restart our Node.js Express web server as soon as there is a file change. There are several ways to accomplish this, for example:

  • install a library like nodemon or forever that restarts the web server
  • run pkill to kill the running Node.js process and then run node app.js again

It feels a little less cumbersome to just install a library like nodemon so let’s do that:
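A minimal way to do that, saving nodemon as a development dependency:

npm install nodemon --save-dev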

This means we now have another library dependency in package.json, but it also means we need to change how we start our app: we need to start it with the command nodemon app.js, so that nodemon takes care of the restart as soon as there is a change. While we are at it, let's add a start script to package.json; after all, that is the more Node.js-ish way of doing things:

Let's describe what we did above, in case you are new to Node.js. Adding a start script to a package.json file means we go into a section called "scripts" and we add an entry start, like so:

// excerpt package.json
"scripts": {
  "start": "nodemon app.js"
}


By default a command defined in "scripts" is run by you typing npm run [name of command]. There are however known commands, like start and test and with known commands we can omit the keyword run, so instead of typing npm run start, we can type npm start. Let's add another command "log" like so:

// excerpt package.json

"scripts": {
  "start": "nodemon app.js",
  "log": "echo \"Logging something to screen\""
}

To run this new command "log" we would type npm run log.

Ok, one thing remains though and that is changing the Dockerfile to change how it starts our app. We only need to change the last line from:

ENTRYPOINT ["node", "app.js"]

to

ENTRYPOINT ["npm", "start"]

Because we changed the Dockerfile this leads to us having to rebuild the image. So let’s do that:

docker build -t chrisnoring/node .

Ok, the next step is to bring up our container:

docker run -d -p 8000:3000 --name my-container --volume $(PWD):/app chrisnoring/node

Worth noting is how we expose the entire directory we are currently standing in and map it to /app inside the container.

Because we’ve already added the /docker route we need to add a new one, like so:

app.get('/nodemon', (req, res) => res.send('hello from nodemon'))

Now we hope that nodemon has done its part when we save our change in app.js:
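Hitting the new route, shown here with curl and an illustrative output line:

curl http://localhost:8000/nodemon
hello from nodemon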

Aaaand, we have a winner. Routing to /nodemon works. I don't know about you, but the first time I got this to work, this was me:

Summary

This has brought us to the end of our article. We have learned about volumes, which are a quite cool and useful feature, and more importantly I've shown how you can turn your whole development environment into a volume and keep working on your source code without having to restart the container.

In the third part of our series, we will be covering how to work with linked containers and databases. So stay tuned.

Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris


Top comments (35)

Serhiy Neskhodovskiy

Hello Chris, thanks for the tutorial. I just noticed you didn't say anything about removing the "COPY . ." line from the Dockerfile after you have linked the volume. I assume there's no longer a need to copy the app into the container. Some would say "it's obvious" and some would say "who cares" since the volume is mounted later and overrides the directory anyway, but in my opinion a note on how to avoid redundant operations and keep things clean will make a nice addition to an otherwise great article.

Johan Lindstrom

Do you not need the COPY . . for the docker build step? The volume is only mounted during docker run.

Thomas Roest

what about npm install in the Dockerfile? You don't need that either right? Isn't it a better idea to only mount a src directory?

orphee

As I'm running the app through Docker Toolbox, I had to add the flag -L to nodemon, otherwise the listening part was not working:

"scripts": {
    "start": "nodemon -L app.js",
Chris Noring

Thanks so much.. I must admit I haven't used Docker on Windows so it's great you are able to point out differences :)

Mike Barker

This also applies to "Docker Desktop" running on macOS as well. More info here: github.com/remy/nodemon#applicatio...

Vitaliy Rudnytskiy

Hi Chris. Thanks for the effort to write these nice tutorials. I'm not completely new to Docker, but still learned some new tricks :)

One additional suggestion is to call the command with volume option using " around $(pwd), like:
$ docker run -d -p 8000:3000 --name my-container --volume "$(PWD)":/app image-name

In my case the command from the tutorial was throwing an error

$ docker run -d -p 8000:3000 --name my-container --volume $(PWD):/app image-name
docker: invalid reference format.
See 'docker run --help'.

because the directory path had spaces in it. Do not ask me why, pls ;-)

Regards,
-Vitaliy

Chris Noring

oh wow.. Great tip Vitaly thanks.. I'll update the article :)

Goran Paunović

If you are on windows and using powershell, change $(pwd) to ${pwd}.

tssidhu

Very helpful article and thanks for taking time to put it together. I had a question about the last docker run command. Shouldn't that include an image name at the end? The version I see currently is:

docker run -d -p 8000:3000 --name my-container --volume $(PWD):/app

But, when I use that in my machine, I get "docker run" requires at least 1 argument. Only way I was able to fix it was by adding the image name at the end.

docker run -d -p 8000:3000 --name my-container --volume "%cd%":/app chrisnoring/node

NOTE: "%cd%" is being used instead of $(PWD) since it's a windows machine

dennisFS

Also worth adding: if you are on Windows using Git Bash, the path conversion gets messy, so the command substitution needs to be escaped like this:
~> docker run -d -p 8000:3000 --name EXAMPLE --volume /$(pwd)/logs:/logs YOUR_IMAGE

Anjum Rashid

For me (in windows git bash), I also had to wrap around with "" to make it work.

$ docker run -d -p 8000:3000 --name YOU_NAME --volume /"$(pwd)"/logs:/logs YOUR_IMAGE

Chris Noring

Hey. You are completely right. Sorry, you had to lose time over this and thank you for posting this correction, I've updated the article.

Jordan Walker

Hi Chris, I really appreciate that you've taken the time to produce these wonderful tutorials. I've learnt so much covering this tutorial during the Easter break.

I had a little problem I came across which I felt I should point out for other devlings hoping to learn Docker. In networked environments, sometimes nodemon doesn't restart, which was the case for myself. To fix this, use nodemon -L app.js rather than nodemon app.js as your start script.

EDIT: just realised there was another comment pointing this out too. Oh well, the first paragraph counts :D

Chris Noring

hi Jordan. Appreciate your comment, happy it was useful :) Let me know if there is anything I can do :)

lafitte pierre

cool

amirdamirov

Hi,

I added new lines to package.json, but when I try to build the image it gives me the following errors:

npm ERR! code EJSONPARSE
npm ERR! file /app/package.json
npm ERR! JSON.parse Failed to parse json
npm ERR! JSON.parse Unexpected string in JSON at position 162 while parsing '{
npm ERR! JSON.parse "name": "node",
npm ERR! JSON.parse "version": "1.0.0"'
npm ERR! JSON.parse Failed to parse package.json data.
npm ERR! JSON.parse package.json must be actual JSON, not just JavaScript.

npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-09-11T07_41_58_281Z-debug.log

This is my package.json file:

{
"name": "node",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
"start": "nodemon app.js"
"log": "echo \"Logging something to screen\""
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"express": "4.17.1"
},
"devDependencies": {
"nodemon": "1.19.2"
}
}

Chris Noring

Looks like you are missing a comma (,) between your tasks in scripts.

amirdamirov

First Thanks for detailed articles, its really helpful.
Second thanks for quick response =)
I will check it.

Chris Noring

thanks, happy to hear that :)

Tomas Eglinskas

Awesome tutorials! I'll be a pro after all the series! 😂

Johan Lindstrom

Very helpful article, thanks!

tduvally

Thank you for this guide, but I'm finding that your examples just aren't consistent. You aren't using the same values for the things, which means the commands error out.

Chris Noring

hi.. please tell me which command is erroring out so I can fix it?

tduvally

Hi Chris, yes:

I don't know node.js, so I had to guess at the fix for the first error, but it looks like just adding the "Hello Chris" route doesn't work. The "=\>" should be "=>" and I had to remove the existing app.get line, since that conflicted from Part I.

After that I got stuck at adding the docker get.app line as well, as that was completely different. I modified it and got it working.

You go between showing the code to run in the text and in screenshots. In one you use "--volume my-volume:/logs", but in the other you use "--volume logs:/logs"

Also, it looks like the comma in the "start" line in package.json causes the build to fail.

Chris Noring

about the \ they were, unfortunately, introduced when this post was imported from my medium account.. sorry to hear you were struggling with it ( removed now ). As for being different in screenshots and text, let me see if I can take new screenshots. Appreciate you taking the time to tell me these things. Btw would you benefit from a video version of this tutorial?

Chris Noring

I've now added a section to explain scripts a bit in Node.js; hopefully that clears up any confusion. It starts with: Let's describe what we did above, in case you are new to Node.js. Adding a start script to a package.json file means we go into a section called "scripts" and we add an entry start, like so...

Stephen Brown

Volumes are convenient and persistence is a blessing ... except when they're not. We moved to using Docker swarm the other day, and realized that vanilla volumes are not replicated across a swarm. Rather, if a service changes its node and ends up on a different host, we also end up with a new volume (or an outdated one from the last time the container instance lit up on that node). Yes, with volumes you can use different driver types, like NFS (or Glusterfs if you want to be fancy) ... but it gets messy. So we ended up just using fixed database services -the main thing for us requiring persistent volumes- external to our swarm.

Khaja Moinuddin Mohammed

Thanks @chris for the wonderful series. I am facing the following issue, running from Mac OS X
I was able to establish the connection between host folder (logs) with container folder (logs) and able to see the contents for the first time. But then when i make the changes in the host machine for the same logs.txt file its not reflecting inside the container. I ain't sure if its with permissions, if it was with permissions, i believe it might not have shown the content of host machine file in the first instance itself. Appreciate if you can help with this.

Khaja Moinuddin Mohammed

It's working now, i don't know how. But, its working :)

Chris Noring

glad to hear that

Heet Shah

Hi All,

I don't know why but I am facing some issues when I run below script:

docker run -d -p 3000:3000 --name my-container --volume "%cd%":/app/usr 5b17ceeeae0b

Please refer to the image for the error:

dev-to-uploads.s3.amazonaws.com/i/...

Thank You.

Chris Noring

hi Pablo.. I was trying to explain how you start out with node app.js. Then you add a volume and you can change files locally and same change happens inside of container. At that point the route is not changing because we haven't restarted our web server. So we replace node app.js with nodemon app.js, rebuild our image and container and now when I change locally, the change happens in container too aaand nodemon ensures web server is restarted. I changed in the text above but I hope my added explanation here made the scenario clearer?