
Learn Docker - from the beginning, part I images and containers

Chris Noring · Originally published at softchris.github.io · 15 min read

Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris

This article is part of a series:

  • Docker — from the beginning, part I, we are here.
  • Docker — from the beginning, Part II, this is about Volumes and how we can use volumes to persist data but also how we can turn our development environment into a Volume and make our development experience considerably better
  • Docker — from the beginning, Part III, this is about how to deal with Databases, putting them into containers and how to make containers talk to other containers using legacy linking but also the new standard through networks
  • Docker — from the beginning, Part IV, this is how we manage more than one service using Docker Compose ( this is 1/2 part on Docker Compose)
  • Docker - from the beginning, Part V, this part is the second and concluding part on Docker Compose where we cover Volumes, Environment Variables and working with Databases and Networks

Now there are a ton of articles out there on Docker, but I struggle with the fact that none of them really explains thoroughly what goes on, or rather that's my impression, so feel free to disagree :). I should say I'm writing a lot of these articles for my own understanding and to have fun in the process :). I also hope they can be useful for you as well.

So I decided to dig relatively deep so that you all hopefully might benefit. TL;DR: this is the first part in a series of articles on Docker; this part explains the basics and why I think you should use Docker.

This article really is Docker from the beginning: I assume no prior knowledge, I assume nothing. Enjoy :)


In this article, we will attempt to cover the following topics

  • Why Docker and what is it, this is probably the most important part of the article: why Docker, and why not some other technology or the status quo? I will attempt to explain what Docker is and what it consists of.
  • Docker in action, we will dockerize an application to showcase that we understand and can use the core concepts that make up Docker.
  • Improving our setup, we should ensure our solution does not rely on static values. We can ensure this by creating and setting environment variables whose values we can read from inside of our application.
  • Managing our container, it's fairly easy to get a container up and running, but let's look at how to manage it; after all, we don't want the container to be up and running forever. Even though it's a very lightweight thing, it adds up, and it can block ports that you want to use for other things.

Remember that this is the first part of a series; we will look into other things on Docker, such as Volumes, Linking, Microservices, and Orchestration, in future parts.

Resources

Using Docker and containerization is about breaking apart a monolith into microservices. Throughout this series, we will learn to master Docker and all its commands. Sooner or later you will want to take your containers to a production environment. That environment is usually the Cloud. When you feel you've got enough Docker experience, have a look at these links to see how Docker can be used in the Cloud as well:

  • Sign up for a free Azure account - to use containers in the Cloud, like a private registry, you will need a free Azure account
  • Containers in the Cloud - a great overview page that shows what else there is to know about containers in the Cloud
  • Deploying your containers in the Cloud - a tutorial that shows how easy it is to leverage your existing Docker skills and get your services running in the Cloud
  • Creating a container registry - your Docker images can be in Docker Hub but also in a Container Registry in the Cloud. Wouldn't it be great to store your images somewhere and actually be able to create a service from that Registry in a matter of minutes?

Why Docker and what is it

Docker helps you create a reproducible environment. You can specify the exact version of different libraries, different environment variables and their values among other things. Most importantly you can run your application in isolation inside of that environment.

The big question is: why would we want that?

  • onboarding, every time you onboard a new developer on a project they need to set up a lot of things, like installing SDKs and development tools, setting up databases, adding permissions and so on. This is a process that can take from one day up to two weeks
  • environments look the same, using Docker you can create DEV, STAGING and PRODUCTION environments that all look the same. That is really great; before Docker/containerization you could have environments that were similar but with small differences, and when you discovered a bug you could spend a lot of time chasing down its root cause. Sometimes the bug was in the source code itself, but sometimes it was due to some difference in the environment, and that usually took a long time to determine
  • works on my machine, this point is much like the one above, but because Docker creates these isolated containers, where you specify exactly what they should contain, you can also ship these containers to customers and they will operate in the exact same way as they did on your development machine(s)

What is it

Ok, so we've mentioned some great reasons above why you should investigate Docker, but let's dive more into what Docker is. We've established that it lets us specify an environment, like the OS, how to find and run the apps, and the variables you need, but what else is there to know about Docker?

Docker creates stand-alone packages called containers that contain everything needed to run your application. Each container gets its own CPU, memory and network resources and does not depend on a specific operating system or kernel. The first thing that comes to mind when I describe the above is a virtual machine, but Docker differs in how it shares or dedicates resources. Docker uses a so-called layered file system, which enables the containers to share common parts, and the end result is that containers are way less of a resource hog on the host system than a virtual machine.

In short, Docker containers contain everything you need to run an application, including the source code you wrote. Containers are also isolated and secure lightweight units on your system. This makes it easy to create multiple microservices that are written in different programming languages and that use different versions of the same library and even of the same OS.

If you are curious about how exactly Docker does this, I urge you to have a look at the following links on the layered file system and the library runc, and also this great Wikipedia overview of Docker.

Docker in action

Ok, so we covered what Docker is and some of its benefits. We also understood that the thing that eventually runs our application is called a container. But how do we get there? Well, we start out with a description file called a Dockerfile. In this Dockerfile, we specify everything we need in terms of OS, environment variables and how to get our application in there.

Now we will jump in at the deep end. We will build an app and Dockerize it, so we will have our app running inside of a container, isolated from the outside world but reachable on ports that we explicitly open up.

We will take the following steps:

  • create an application, we will create a Node.js Express application, which will act as a REST API.
  • create a Dockerfile, a text file that tells Docker how to build our application
  • build an image, the pre-step to having our application up and running is to first create a so-called Docker image
  • create a container, this is the final step in which we will see our app up and running, we will create a container from a Docker image

Creating our app

We will now create an Express Node.js project and it will consist of the following files:

  • app.js, this is the file that spins up our REST API
  • package.json, this is the manifest file for the project, here we will see all the dependencies like express but we will also declare a script start so we can easily start our application
  • Dockerfile, this is a file we will create to tell Docker how to Dockerize our application

To generate our package.json we just place ourselves in the project's directory and type:

npm init -y

This will produce the package.json file with a bunch of default values.

Then we should add the dependency we are about to use, namely the library express. We install it by typing:

npm install express --save
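At this point, express should be listed under dependencies in package.json. If you also add the start script mentioned earlier, the file could look roughly like this (the name and version fields come from npm defaults for your directory, and the exact express version will vary):

{
  "name": "app",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}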

Let’s add some code

Now that we have done all the prework of generating a package.json file and installing dependencies, it's time to add the code needed for our application to run, so we add the following code to app.js:

// app.js
const express = require('express')

const app = express()

const port = 3000

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

We can try and run this application by typing:

node app.js
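Assuming port 3000 is free, the terminal should print:

Example app listening on port 3000!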

Going to http://localhost:3000 in a web browser, we should now see:

[screenshot: "Hello World!" shown in the browser]

Ok so that works, good :)

One little comment though: we should take note of the fact that we are assigning port 3000 here; we will need this when we later create our Dockerfile.

Creating a Dockerfile

So the next step is creating our Dockerfile. Now, this file acts as a manifest but also as a build instruction file: it tells Docker how to get our app up and running. Ok, so what is needed to get the app up and running? We need to:

  • copy all app files into the Docker container
  • install dependencies like express
  • open up a port in the container that can be accessed from the outside
  • instruct the container how to start our app

In a more complex application, we might need to do things like setting environment variables, setting credentials for a database or running a database seed to populate the database, and so on. For now, we only need the things we specified in the bullet list above. So let's try to express that in our Dockerfile:

# Dockerfile

FROM node:latest

WORKDIR /app

COPY . .

RUN npm install

EXPOSE 3000

ENTRYPOINT ["node", "app.js"]

Let’s break the above commands down:

  • FROM, this is us selecting an OS image from Docker Hub. Docker Hub is a global repository that contains images that we can pull down locally. In our case we are choosing an image based on Debian that has Node.js installed; it's called node. We also specify that we want the latest version of it, by using the tag :latest (see the note on tags after this list)
  • WORKDIR, this simply means we set a working directory. This is a way to set up for what is to happen later, in the next command below
  • COPY, here we copy the files from the directory we are standing in into the directory specified by our WORKDIR command
  • RUN, this runs a command in the terminal; in our case we are installing all the libraries we need to build our Node.js Express application
  • EXPOSE, this means we are opening up a port; it is through this port that we communicate with our container
  • ENTRYPOINT, this is where we state how we start up our application. The commands need to be specified as an array, so the array ["node", "app.js"] will be translated to node app.js in the terminal
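A quick note on that tag, since it is easy to gloss over: :latest is convenient for a tutorial, but if you want more reproducible builds you can pin a specific version with the same name:tag syntax, for example (a suggestion on my part, not required by this tutorial):

FROM node:12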

Quick overview

Ok, so now we have created all the files we need for our project and it should look like this:

-| app.js // our express app
-| Dockerfile // our instruction file that Docker will read from
-| node_modules/ // directory created when we run npm install
-| package.json // npm init created this
-| package-lock.json // created when we installed libraries from NPM
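One optional refinement, since the COPY . . instruction in our Dockerfile copies everything listed above, including node_modules: adding a .dockerignore file (my suggestion, not part of the original setup) keeps the locally installed modules out of the image and lets RUN npm install do a clean install inside the container:

# .dockerignore
node_modules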

Building an image

There are two steps that need to be taken to have our application up and running inside of a container, those are:

  • creating an image, with the help of the Dockerfile and the command docker build we will create an image
  • starting the container, now that we have an image from the step above, we need to create and start a container

First things first, let’s create our image with the following command:

docker build -t chrisnoring/node:latest .

The above instruction creates an image. The . at the end is important, as it tells Docker where your Dockerfile is located; in this case, it is the directory you are standing in. If you don't have the OS image that we ask for in the FROM command, it will be pulled down from Docker Hub, and then your specific image will be built.

Your terminal should look something like this:

[screenshot: docker build output pulling node:latest and running each step in the Dockerfile]

What we see above is how the OS image node:latest is pulled down from Docker Hub and then each of our commands, like WORKDIR and RUN, is executed. Worth noting is how it says removing intermediate container after each step; this is Docker being smart and caching all the different file layers after each command so it goes faster. In the end, we see successfully built, which is our cue that everything was constructed successfully. Let's have a look at our image with:

docker images

[screenshot: docker images output listing chrisnoring/node]

We have an image, success :)

Creating a container

The next step is to take our image and construct a container from it. A container is the isolated piece that runs our app inside of it. We create a container using docker run. The full command looks like this:

docker run chrisnoring/node

That's not really good enough though, as we need to map the internal port of the app to an external one on the host machine. Remember, this is an app that we want to reach through our browser. We do the mapping by using the flag -p, like so:

-p [external port]:[internal port]

The full command now looks like this:

docker run -p 8000:3000 chrisnoring/node

Ok, running this command means we should be able to visit our container by going to http://localhost:8000; 8000 is our external port, remember, which maps to the internal port 3000. Let's see; let's open up a browser:

[screenshot: the app responding with "Hello World!" on http://localhost:8000]

There we have it folks, a working container :D
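If you prefer the terminal over the browser, a quick check works too (assuming you have curl installed):

curl http://localhost:8000
Hello World!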


Improving our setup with Environment Variables

Ok, so we've learned how to build our Docker image and how to run a container, and thereby our app inside of it. However, we could be handling the PORT part a bit more nicely. Right now we need to keep track of the port we start the Express server with inside of our app.js, to make sure it matches what we write in the Dockerfile. It shouldn't have to be that way; it's static and error-prone.

To fix it we could introduce an environment variable. This means that we need to do two things:

  • add an environment variable to the Dockerfile
  • read from the environment variable in app.js

Add an environment variable

For this we need to use the command ENV, like so:

ENV PORT=3000

Let’s add that to our Dockerfile so it now looks like so:

FROM node:latest

WORKDIR /app

COPY . .

ENV PORT=3000

RUN npm install

EXPOSE 3000

ENTRYPOINT ["node", "app.js"]

Let's make one more change, namely updating EXPOSE to use our variable, so we get rid of static values and rely on variables instead, like so:

FROM node:latest

WORKDIR /app

COPY . .

ENV PORT=3000

RUN npm install

EXPOSE $PORT

ENTRYPOINT ["node", "app.js"]

Note above how we changed our EXPOSE command to use $PORT; any variables we use need to be prefixed with a $ character:

EXPOSE $PORT
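One reader note from the discussion below: if the plain $PORT form doesn't resolve on your setup, the curly-brace form is equivalent and worth trying:

EXPOSE ${PORT}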

Read the environment variable value in app.js

We can read values from environment variables in Node.js like so:

process.env.PORT

So let’s update our app.js code to this:

// app.js
const express = require('express')

const app = express()

const port = process.env.PORT

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))
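A small optional hardening step (my suggestion, not part of the original tutorial): give the variable a fallback, so the app still starts when PORT isn't set, for example when you run it outside Docker:

const port = process.env.PORT || 3000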

NOTE: when we make a change to our app.js or our Dockerfile, we need to rebuild our image. That means we need to run the docker build command again, and prior to that we need to have torn down our container with docker stop and docker rm. More on that in the upcoming sections.
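Put together, the rebuild cycle looks roughly like this (a sketch; stopping and removing containers is covered in the next section):

docker stop id-of-container
docker rm id-of-container
docker build -t chrisnoring/node:latest .
docker run -p 8000:3000 chrisnoring/node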

Managing our container

Ok, so you have just started your container with docker run and you notice that you can’t shut it off in the terminal. Panic sets in ;) At this point you can go to another terminal window and do the following:

docker ps

This will list all running containers; you will be able to see the container's name as well as its id. It should look something like this:

[screenshot: docker ps output showing the CONTAINER ID and NAMES columns]

As you can see above, we have the CONTAINER ID column and the NAMES column; either of these values will work to stop our container, because that is what we need to do next, like so:

docker stop f40

We opt for using the CONTAINER ID and only its first three digits; we don't need more than that. This will effectively stop our container.

Daemon mode

We can do as we did above and open a separate terminal tab, but running the container in daemon mode is a better option. This means that we run the container in the background and all output from it will not be visible. To make this happen we simply add the flag -d. Let's try that out:
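Using the same image and port mapping as before, that would be:

docker run -d -p 8000:3000 chrisnoring/node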


What we get back now is just the container id; that's all we're ever going to see. Now it's easier for us to stop the container if we want, by typing docker stop 268, where 268 is the first three digits of the id we got back.

Interactive mode

Interactive mode is an interesting one: it allows us to step into a running container and list files, add/remove files, or do just about anything we could do in, for example, bash. For this, we need the command docker exec, like so:

[screenshot: an interactive bash session inside the container, running ls]

Above we run the command:

docker exec -it 268 bash

NOTE, the container needs to be up and running. If you've stopped it previously you should start it with docker start 268. Replace 268 with whatever id your container got when you typed docker run.

268 is the first three digits of our container's id, -it means interactive mode, and our argument bash at the end means we will run a bash shell.

We also run the command ls once we get the bash shell up and running, which means we can easily list what's in the container and verify that we built it correctly; it's a good way to debug as well.

If we just want to run something on the container like a node command, for example, we can type:

docker exec 268 node app.js

That will run the command node app.js in the container.

Docker kill vs Docker stop

So far, we have been using docker stop as a way to stop the container. There is another way of stopping the container, namely docker kill, so what is the difference?

  • docker stop, this sends the signal SIGTERM followed by SIGKILL after a grace period. In short, this is a way to bring down the container more gracefully, meaning it gets to release resources and save state (see the sketch after this list)
  • docker kill, this sends SIGKILL right away. This means resource release or state saving might not work as intended. In development it doesn't really matter which of the two commands is used, but in a production scenario it is probably wiser to rely on docker stop
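To actually benefit from that grace period, your app needs to listen for SIGTERM. A minimal sketch of what that could look like in our app.js (my addition, not something the tutorial's app does):

// capture the server instance so we can close it on shutdown
const server = app.listen(port, () => console.log(`Example app listening on port ${port}!`))

// docker stop sends SIGTERM first; close connections cleanly, then exit
process.on('SIGTERM', () => {
  server.close(() => process.exit(0))
})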

Cleaning up

During the course of development you will end up creating tons of containers, so make sure you clean up by typing:

docker rm id-of-container
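If you have many stopped containers to get rid of, this shell expansion removes them all in one go (a commenter below points this out as well):

docker stop $(docker ps -aq)
docker rm $(docker ps -aq)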

Summary

Ok, so we have explained Docker from the beginning. We've covered motivations for using it and the basic concepts. Furthermore, we've looked into how to Dockerize an app and in doing so covered some useful Docker commands. There is so much more to know about Docker, like how to work with databases and Volumes, how and why to link containers, and how to spin up and manage multiple containers, also known as orchestration.

But this is a series of articles; we have to stop somewhere, or this article will be very long. Stay tuned for the next part, where we will talk about Volumes and Databases.

Acknowledgments

Thank you, Dan Wahlin, for your amazing course on Docker; a lot of things about Docker clicked for me because of your course.

Follow me on Twitter; I'm happy to answer your questions and take suggestions for topics.

Discussion

Dan Newton

Well written Chris. I like seeing more people writing tutorials like a conversation 😊😊

Andy

If anyone gets a syntax error in their IDE in the app.js code that says "'import' is only available in ES6 (use 'esversion: 6')," follow these instructions: stackoverflow.com/questions/363188...

Andy

Yet another note: the tutorial doesn't include instructions on how to stop running your container. There has to be a better way than this, but I had to open a new Terminal tab and find the ID of my container by running

docker container ls

I copied and pasted the id and then ran

docker container stop [ID]

Chris Noring Author

actually, you can run docker ps to get a list of running containers. Then you can find out both the name and the id; either is good when you try to stop it, which you do with docker stop [id or name]. Usually you only need the first 2-3 characters of the id

Andy

Realized this is addressed later in the tutorial, yay :)

vp-31

to view containers running:
docker ps
to view all created containers:
docker ps -a

Andy

When you run

docker build -t [YOUR STUFF]

NOTE: the -t flag stands for 'tag.' According to the Docker documentation:
--tag , -t: Name and optionally a tag in the ‘name:tag’ format

Andy

Another note: when you get to the step where you build your first image, the command would be more accurately described as:

docker build -t YOUR_DOCKER_USERNAME/THE_OS_YOU_SELECTED .

Chris Noring Author

well, the tagging is pretty much up to you; it could be chris/node-web-app. You are right, though, that you need to name it in a specific way when you tag it for a container registry, for example for Azure (dev.to/azure/learn-how-you-can-bui...), for the registry to accept it. I'm sure that's true for AWS as well

Andy

Sweet, thanks!

Andy

The docker exec command has to be run on a RUNNING container

Chris Noring Author

fair point Andy. I've updated the text, thanks :)

Kevin

Hi, I have a couple of questions. I thought that Docker was a tool for not having to install any application or tool to work, but in your tutorial you have to create a project with npm. So I would have to have node and npm installed on my local machine? Couldn't I create and generate this entire project from a container, without having to install node or npm on my local machine?

Chris Noring Author

You usually use Docker for two distinct things. 1) Packaging: this is you describing the environment and the app code in a Dockerfile, and once you run docker build + docker run, you have a running instance that's like a black box. 2) Development: you can use it for development. The scenario I've seen most is that developers install all they need on a local machine, then they create a mount point between the local app files and a place in the container once it's up and running; this allows you to change files locally and have that change reflected in the container. The second scenario you are referring to here is doing everything in a container. You would then create a container, attach to it and do local dev in there. It's the less used scenario, because you would still need to find a way to persist what you do (app code changes) to a mount point on your drive, in case you need to power the container down and up

Kevin

ok, but if I want to create an application with node and react, I install those on my local machine; at that point I don't need a container, because I have these programs on my local machine, and when I share this project people just need to exec npm start to install the packages. Docker has a Docker Hub, right? Isn't this like GitHub? I mean, if I create a container, can I take a snapshot of this container and do a docker push to my Docker repo, to pull it on another PC?

Chris Noring Author

normally you wouldn't share a container, but rather point people to a GitHub repo where the app code lives, together with a Dockerfile.. a container is a running instance of your app.. Then to restore it you would just do docker build + docker run on the Dockerfile in your GitHub repo

Chris Noring Author

as for using Docker Hub, that's used to share Docker images, not containers.. so if you have a Dockerfile you are happy with, where you start with an OS image and add scripts to install things on it, you can then build an image from the Dockerfile and push it to Docker Hub, and others can use that image.. I mean, it all depends on what you want to achieve.. if you want to share app code, I would say it's GitHub; if it's a certain base image + installing things like brew or Node.js, then I suggest creating an image from that and pushing it to Docker Hub

Kevin

ok thanks, I need to keep my local machine clean, so I will find a method to develop all my applications in containers. Thanks again man :)

PoYi

Thanks for the great tutorial.
I'm following it but ran into a problem when running the container.
・After the console log is printed, I can't stop the process, even when pressing "Control" + "C".
Is there any solution other than the following?
・Open a new terminal and stop the container.
・Let the container run in the background.
Thanks and best regards!

Chris Noring Author

yea that's how you usually do it. I mean run the container in the background and then stop it.

PoYi

Thank you for the immediate reply.
I'll try that solution :D
Have a nice day.

Andrea Busi

Fantastic article! I really love when I can learn and also try a real example! :)

In my case, I had some problems with the port, to make it work I had to modify my Dockerfile in this way:

EXPOSE ${PORT}

Without using curly brackets, the port wasn't available inside the Node.js file. Do you know why? I'm using Docker 2.0.0.3 (31259) for macOS.

shahab

Very well organised. Each step is explained with absolute clarity. Thank you Chris

Chris Noring Author

Thank you Shahab :)

Eugene Vedensky

Hi Chris, why are we specifying 'from/node' when we've already been explicit in the Dockerfile about which os to build from?

Chris Noring Author

hi Eugene, I'm trying to understand what you mean. Where are you seeing this?

Klausbert

Hi Chris, I was wondering the same thing... the Dockerfile starts with:

FROM node:latest

and the image building command is:

docker build -t chrisnoring/node:latest .

So why did you need to specify node:latest twice?
Thanks!!

Eugene Vedensky

Yes thank you for taking the time to be more specific, this is exactly what I was referring to. I've tried without the /node:latest and it produces the same result because of the Dockerfile having that specified.

Chris Noring Author

I could be wrong here, but I think you can call it what you want in the docker build command.. so it could be
FROM node:latest
in Dockerfile and
docker build -t chrisnoring/node:dev
It's just a way to give different builds different names.

Joseph Marie Alba

For Windows Docker, use the docker-machine ip instead of localhost. Use the command docker-machine ip:

$ docker-machine ip
192.168.99.101 <-- use this ip address

Then browse to 192.168.99.101:8000

abhishekshetty

This is only applicable if one is using Docker Toolbox on Windows.
If someone is using Docker for Windows, it will bind to localhost as in the tutorial.

justStarNew

great job

Ray Carneiro

Great article!

In case someone is using Windows 10, like me: when creating the image I had to change a Docker for Windows setting under "Settings > Command Line > Enable experimental features". I was running into some weird issue due to having downloaded a Linux image running on Windows (switching to Linux images didn't work).

Hope it helps someone!

orphee

Thanks for this useful part.
Isn't there a missing line in the Dockerfile?

RUN npm install express

Chris Noring Author

hi. Because we ran npm install --save in our terminal at an earlier stage, the express library is now part of package.json. So all we need in our Dockerfile is RUN npm install; that will install everything listed in "dependencies" in our package.json file (including Express)

orphee

Oh, I missed this step in your tutorial.
It's all good then, thanks for the reply!

Okolie

Thanks for the tutorial. But it would help to mention that to exit the interactive bash, you'd type Ctrl+D.

Chris Noring Author

appreciate that comment Okolie, I will update per your suggestion :)

Adithya Sreyaj

This is as simple as it gets....

Great tut.

Chris Noring Author

that's what I was aiming for :) Glad you like it :)

Arsh Radhanpura

Thanks a lot man

Mohammad Fazel

Thanks for the amazing article!

For stopping/removing all available containers we may use, in bash:
$ docker stop $(docker ps -aq)
$ docker rm $(docker ps -aq)

Michael L. Mehr

A small thing, but it prevented me from using the Dockerfile at first: comments in a Dockerfile start with #, not //.

Chris Noring Author

Appreciate that feedback Michael :)

Lucian Naie

Maybe it would help mentioning that after changing your app or Dockerfile, the image has to be rebuilt.

Chris Noring Author

thank you for that Lucian. I'll add it :)


NicolaSilviu

hello, I get this error when trying to run it from my virtual machine; please advise:

silviu@ubuntu:~/dockerProject/nodeStuff$ docker build -t chrisnoring/node:latest .
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&session=ektidoagm9w9a8q0qwcmzjq9h&shmsize=0&t=chrisnoring%2Fnode%3Alatest&target=&ulimits=null&version=1: dial unix /var/run/docker.sock: connect: permission denied

NicolaSilviu

Thank you Chris, the fix from there worked. The problem was that my user did not have permission to access /var/run/docker.sock.

Chris Noring Author

Great to hear it :)