It’s been a while since your last encounter with Don Porto. The smell of fish is still in the air, but some questions are left unanswered. Like, what was all that output we got in the console? How does the image actually get built? And how are you going to send this amazing application of yours to your friend in Guadalajara?
You go down to the docks in the hope of finding Don Porto again. 🐟
Don Porto
Hey, don Porto, it’s me, your favorite student.
Who?
You know, I made that amazing application and you packaged it for me. It’s called masterpiece.
Ah yes, the annoying one, now I remember. What do you want?
I was wondering if you could tell me more about how images are built.
Ah... as I said, annoying. I hope you remember that Docker is a platform for building, running, and shipping applications. It’s very helpful in situations like yours: the code works on your machine but not on your friend's. Tell me, how did you explain to your friend how to run this amazing application of yours?
Well I wrote him a message:
1. clone the repo from my github account 💻
2. install node 💡
3. run npm install for dependencies ⌛
4. run node index.js and voila, magic happens 🎉
You know they speak Spanish in Guadalajara? No wonder he got confused... So, even though this is a pretty simple app, there is still a possibility of doing something wrong in any of these steps. Now imagine if you had a much better app than yours.
Ok, I'll try. I don’t think such a thing exists.
Ha! With really good applications this would be even more complicated; you would have to go through a lot more than just four steps. There would be a lot of documentation, and maybe you would have to do this on many computers, not just one.
Yes, and who has the time? I have to go home and have dinner with my cat. She was pretty angry the last time.
Yes, I can see all the scratch marks. Anyway... wouldn’t it be great if your friend didn't have to do any of these steps? We could just send him a file and he could run the whole application with everything already configured.
Well, yes, I guess I would have many more friends then.
No, no, you wouldn’t. That’s what Docker does for us. We package all of our code, and the instructions on how to run it, into a single file, and when someone downloads it there is no chance of error. If it works on our machine, it will work on theirs too.
Aha. I get it. And what about building the image? What exactly happens?
Building masterpiece 🚧
Let's start with our Dockerfile, step by step.
FROM node:16-alpine
Run this command and watch what happens.
docker build -t masterpiece .
As you can see, the build was very fast because we only had one command. We also used a very small Linux base image, Alpine. There are a lot of images available on Docker Hub and you should choose the one that suits your needs. A smart person would now ask why we even need a base image.
What? Oh, yes why do we need...
Save it. If I told you to install Chrome on a machine that doesn't have an OS, what would you do?
Well, I would... I would...
You would first need to install an OS, then go to the default browser, search for Chrome, download it and run it. You can think of a base image as a starting point for our custom image. Let's see what we have inside of it.
Ok, I will run this container interactively with a shell inside of it. Then I can run ls to see all the files inside.
docker run -it masterpiece sh
Heeeey, where are my files? I have much more than this.
Calm down. We haven't copied your precious files yet. Add the COPY . /app
or WORKDIR /app and COPY . .
commands to your Dockerfile, then build the image again.
FROM node:16-alpine
WORKDIR /app
COPY . .
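A side note that isn't part of Don Porto's lecture: COPY . . copies everything in the build context, including things like node_modules or .git if they exist. A .dockerignore file next to the Dockerfile keeps those out — a minimal sketch, assuming a typical Node project:

```dockerfile
# .dockerignore — files Docker should skip when copying the build context
node_modules
npm-debug.log
.git
```

With this in place, dependencies get installed inside the image rather than copied from your machine, which keeps the image smaller and more reproducible.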
Aha, so I can use an absolute or relative path. Cool.
As you can see the build took a little bit longer this time. Now when you run ls
inside the container shell you will see all of your files.
Great. But wait. My awesome application is still not running. I mean, I am running a shell inside of the container, but where is my masterpiece?
Ah, you noticed that, didn't you? So, in order for us to run your "masterpiece", we have to specify the command to execute when running our container.
So, something like this?
docker run masterpiece node index.js
Yes, but this is not very practical. Each time we run the container we have to specify the command to execute. We could add an additional command to our Dockerfile.
Is that what CMD ["node", "index.js"]
does?
Yes, this fish air is doing you good. Add that to your Dockerfile and build the image again.
FROM node:16-alpine
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
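One detail worth knowing (not part of the original conversation): CMD comes in two forms, and the exec form used above is generally preferred because the process runs directly instead of under a shell:

```dockerfile
# exec form — node becomes PID 1 and receives stop signals directly
CMD ["node", "index.js"]

# shell form — runs under /bin/sh -c, which can swallow signals
CMD node index.js
```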
Now when you run the container your app is starting.
Ok, this is great, but as I heard on the streets, the CMD command can be overridden. Each time I start the container I can specify a command to run instead of the CMD one.
Well, I don't know about these streets you are visiting, but yes, you are right, someone could still do this: docker run masterpiece echo Fluffy
Awwwww... you remembered my cat's name. 😊
It's on your forehead, remember? If you want to make that harder, instead of using CMD you can use ENTRYPOINT ["node", "index.js"]. An ENTRYPOINT can still be overridden, but only with the explicit --entrypoint flag, so nobody does it by accident. But let's leave the CMD for now.
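For the curious, ENTRYPOINT and CMD can also work together — a sketch of a common pattern (not from the original dialogue), where CMD supplies a default argument that the ENTRYPOINT always receives:

```dockerfile
# the container always runs node...
ENTRYPOINT ["node"]
# ...with index.js as the default argument;
# docker run masterpiece other.js would replace only the argument
CMD ["index.js"]
```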
I have another question. If I send this file to my friend he could delete some files inside that container. I don't think that's very smart. Can we prevent this?
Oh, yes we wouldn't want to mess up this perfection. You can specify a user inside your Dockerfile. Now your Dockerfile should look something like this:
FROM node:16-alpine
RUN addgroup friends && adduser -S -G friends guadalajara
USER guadalajara
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
You build the image again, run the container and then execute whoami
inside the shell to see the current user.
Ok and now I can't create a new directory or delete anything. Great.
You can also specify the port on which your application is running. By adding EXPOSE 3000
to your Dockerfile you aren't actually exposing the port, it's just part of the documentation.
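Putting that into the Dockerfile from above (with BusyBox's adduser options placed before the username, as the tool expects), EXPOSE usually sits just before CMD:

```dockerfile
FROM node:16-alpine
RUN addgroup friends && adduser -S -G friends guadalajara
USER guadalajara
WORKDIR /app
COPY . .
# documentation only — the port still has to be published with -p at run time
EXPOSE 3000
CMD ["node", "index.js"]
```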
Aha, yes, I remember: to actually publish the port I have to run docker run -p 3000:3000 masterpiece
Yes, great. You may have noticed that each time we build our image we get the output in the console. This shows us that our image is built layer by layer. Can you guess why this is better?
Is it because when we change something only that part and all the parts after that will get rebuilt?
Ding, ding, ding we have a winner. Let's check the actual history of how our image was built. Run this command:
docker history masterpiece
We read it bottom to top; the bottom layers come from the Alpine base image, so don't let those confuse you. You can clearly see how the image was built: we built our custom image on top of the base image. If you want to see all the images you currently have, just run docker images or docker image ls
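Since layers are cached top to bottom, a common optimization — sketched here as a simplified variant of the Dockerfile above, assuming dependencies live in package.json — is to copy the dependency manifests first, so npm install only reruns when dependencies actually change:

```dockerfile
FROM node:16-alpine
WORKDIR /app
# copy only the manifests first — this layer stays cached until they change
COPY package*.json ./
RUN npm install
# editing source code invalidates only the layers from here down
COPY . .
CMD ["node", "index.js"]
```

With this ordering, a one-line code change rebuilds only the final COPY and CMD layers instead of reinstalling every dependency.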
Shipping our image 🚢
I don't have much time, it's almost dinner. I can feel Fluffy is getting angry, so tell me how do I share this image?
Yes, we wouldn't want to upset her. First, under your Docker Hub account, fluffy, create a repository called masterpiece. After that we have to tag your masterpiece.
docker image tag masterpiece fluffy/masterpiece:latest
Make sure the tag matches the account and repository you are pushing to. Then run docker login, sign in with your credentials, and push the image to Docker Hub.
docker push fluffy/masterpiece:latest
Your image is now on Docker Hub and anyone can pull it with docker pull fluffy/masterpiece:latest
. Although I don't see why anyone would...
Thank you don Porto. Can we talk next time about containers? And what about volumes? And...
Ok, ok I get it you are annoying. Now go home to Fluffy. 🐱