INTRODUCTION
Docker is like a magic box that lets you run apps the same way everywhere, whether on your Mac, a friend’s PC, or even in the cloud. No more "But it works on my machine!" problems.
In this guide, we will cover the eight parts of the Docker workshop:
PART 1: Containerize an application
PART 2: Update the application
PART 3: Share the application
PART 4: Persist the DB
PART 5: Use bind mounts
PART 6: Multi container apps
PART 7: Use Docker Compose
PART 8: Image-building best practices
We will start from zero, with no prior Docker knowledge needed! By the end, you will know how to:
✅ Package an app in a container (like a lunchbox).
✅ Update it easily.
✅ Share it with others.
✅ Store data safely.
✅ Run multiple apps together (like a mini internet café).
✅ And much more, all shown with simple steps and Mac screenshots!
Let’s dive in!
PART 1: CONTAINERIZE AN APPLICATION (Put Your App in a Box)
What’s a Container?
Think of a container like a lunchbox:
Your app is the sandwich.
The Docker container is the lunchbox keeping it fresh.
No matter where you open it (Mac, Windows, Linux), it works the same way!
Before you begin, make sure you have all of the following available on your PC:
- You have installed the latest version of Docker Desktop.
- You have installed a Git client.
- You have an IDE or a text editor to edit files. Docker recommends using Visual Studio Code.
Steps
- Set up a project folder (the sample app we'll use is a Node.js todo app)
Click on Terminal in the menu bar, and in the dropdown click New Terminal.
Then create a directory and open it by running the commands:
mkdir DevOpsstarter
cd DevOpsstarter
- Get the app
a. Before you can run the application, you need to get the application source code onto your machine.
- Run the command:
git clone https://github.com/docker/getting-started-app.git
b. View the contents of the cloned repository. You should see the following files and sub-directories.
getting-started-app/
.dockerignore
package.json
README.md
spec/
src/
yarn.lock
- Build the app's image
To create the image, you'll need something called a Dockerfile. This is just a plain text file (it doesn't even need a file extension) that lists step-by-step instructions. Docker reads these instructions to build a special kind of package called a container image, which is like a ready-to-use setup of your app or service.
a. In the getting-started-app folder (the same place where the package.json file is), create a new file called Dockerfile and add the following content to it:
# syntax=docker/dockerfile:1
FROM node:lts-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
Notice that this Dockerfile starts off with a node:lts-alpine base image, a small and efficient version of Linux that already has Node.js and Yarn (tools used to run and manage JavaScript apps) built in. Then, it copies your app’s files into this setup, installs everything the app needs, and starts it up.
b. Build the image using the following commands:
In the terminal, make sure you're in the getting-started-app directory. Replace /path/to/getting-started-app with the path to your getting-started-app directory.
cd /path/to/getting-started-app
Build the image.
Run the command: docker build -t getting-started .
The docker build command uses your Dockerfile to create a new image of your app. You may have noticed that Docker downloaded a bunch of things; these are called "layers." That happened because you told Docker to start with a base image called node:lts-alpine, and since it wasn't already on your computer, Docker had to download it first.
Once that was done, Docker followed the steps in your Dockerfile: it copied your app’s files into the image and used Yarn to install everything your app needs to run.
The CMD part in the Dockerfile sets the default command that will run when you start a container using this image, so basically, it tells Docker how to launch your app.
The -t flag in the command gives your image a name. In this case, you called it getting-started, so it’s easier to refer to later when you want to run it.
Finally, the . (dot) at the end of the command just means: “look for the Dockerfile in this current folder.”
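If you want to confirm the image was built, you can list your local images with docker image ls. The ID, age, and size below are illustrative; yours will differ:

```
$ docker image ls
REPOSITORY        TAG       IMAGE ID       CREATED          SIZE
getting-started   latest    a1b2c3d4e5f6   10 seconds ago   220MB
```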
- Start an app container
Now that your image is ready, you can use the docker run command to start your app inside a container. Think of it like launching your app in its own little box.
a. To run your app, use the docker run command and tell it which image to use, so in this case, the one you just created (called getting-started). This starts your app inside a container.
docker run -d -p 127.0.0.1:3000:3000 getting-started
The -d flag (short for detach) tells Docker to run your app in the background. That means your app keeps running, but you get your terminal back and won’t see the app’s output (logs) unless you check them separately.
The -p flag (short for publish) connects your app running inside Docker to your computer, so you can access it. It works like this:
-p 127.0.0.1:3000:3000
This means: take port 3000 from inside the container (where the app is running), and make it available on your computer at localhost:3000. Without this, you wouldn’t be able to open the app in your web browser.
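If you'd rather verify from the terminal before opening a browser, you can send a quick request to the published port (curl ships with macOS by default):

```
curl -i http://localhost:3000
```

A successful response starts with an HTTP 200 status line, followed by the app's HTML.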
b. After waiting a few seconds, open your web browser and go to http://localhost:3000. You should see your app up and running!
Add one or two items to your list and check that everything works like it should. You can mark items as done and also delete them. This means your front end (what you see) is successfully saving data to the back end (the storage part).
Now, you have a working to-do list app with some items in it.
If you want to see your running containers, you should notice at least one container using the getting-started image and running on port 3000. You can check this either by using commands in the terminal or by opening Docker Desktop, which has a user-friendly interface for managing containers.
In the CLI
Run the docker ps command in a terminal to list your containers.
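The output will look something like this; the container ID and name are randomly generated, so yours will differ:

```
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                      NAMES
a1b2c3d4e5f6   getting-started   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   127.0.0.1:3000->3000/tcp   upbeat_fermat
```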
In Docker Desktop
Select the Containers tab to see a list of your containers.
Summary
In this section, you learned how to create a Dockerfile to build an image. After building the image, you started a container and saw your app running.
PART 2: UPDATE THE APPLICATION (Change the Sandwich)
Why Update?
You wouldn’t eat the same sandwich forever. Apps need updates too!
In Part 1, you put a todo app into a Docker container (a process called containerizing). In this next part, you’ll learn how to update the app and its image. You’ll also see how to stop and delete a running container when you no longer need it.
Update the source code
In the following steps, you'll change the "empty text" when you don't have any todo list items to "You have no todo items yet! Add one above!"
a. In the src/static/js/app.js file, update line 56 to use the new empty text.
```diff
- No items yet! Add one above!
+ You have no todo items yet! Add one above!
```
b. Build your updated version of the image, using the docker build command.
docker build -t getting-started .
c. Start a new container using the updated code.
docker run -dp 127.0.0.1:3000:3000 getting-started
You will see an error like this:
docker: Error response from daemon: driver failed programming external connectivity on endpoint laughing_burnell
(bb242b2ca4d67eba76e79474fb36bb5125708ebdabd7f45c8eaf16caaabde9dd): Bind for 127.0.0.1:3000 failed: port is already allocated.
The error occurred because you aren't able to start the new container while your old container is still running. The reason is that the old container is already using the host's port 3000 and only one process on the machine (containers included) can listen to a specific port. To fix this, you need to remove the old container.
Remove the old container
To delete a container, you need to stop it first. Once it’s no longer running, you can go ahead and remove it completely.
You can do this using terminal commands or by using Docker Desktop's easy-to-use interface; just pick whichever method you're more comfortable with. In this article, we'll use the CLI.
Remove a container using the CLI
Get the ID of the container by using the docker ps command.
Use the docker stop command to stop the container. Replace <the-container-id> with the ID from docker ps.
docker stop <the-container-id>
Once the container has stopped, you can remove it by using the docker rm command:
docker rm <the-container-id>
Note
You can stop and remove a container in a single command by adding the force flag to the docker rm command. For example: docker rm -f <the-container-id>
Start the updated app container
Now, start your updated app using the docker run command.
docker run -dp 127.0.0.1:3000:3000 getting-started
Refresh your browser on http://localhost:3000 and you should see your updated help text.
Summary
In this section, you learned how to make changes to your app, rebuild the container to apply those updates, and how to stop and delete a container when you're done with it.
PART 3
SHARE THE APPLICATION
Once you have created a Docker image, you can share it with others. To do that, you need to upload it to a place called a Docker registry. The main one people use is called Docker Hub, it's where the images you've been using came from.
Docker ID
A Docker ID lets you use Docker Hub, which is the biggest place to find and share container images. If you don’t have a Docker ID yet, you can create one for free at https://app.docker.com/signup
Note: to push an image, you first have to create a repository on Docker Hub.
a. Sign up or Sign in to Docker Hub via https://hub.docker.com/
b. Click the "Create Repository" button to start making a new space where you can store and share your Docker image.
c. For the repository name, type getting-started.
Also, make sure the Visibility is set to Public, so others can find and use your image. Then select create.
Let's try to push an image to Docker Hub.
- In the command line, run the following command:
docker push docker/getting-started
When you run it, you will see an error like this:
docker push docker/getting-started
The push refers to repository [docker.io/docker/getting-started]
An image does not exist locally with the tag: docker/getting-started
This error is normal because the image doesn’t have the right name yet. Docker is trying to find an image called docker/getting-started, but the one you created is just named getting-started on your computer.
You can confirm this by running the command:
docker image ls
- To fix the issue, start by signing in to Docker Hub with your Docker ID. Use this command in your terminal: docker login -u YOUR-USER-NAME (replace YOUR-USER-NAME with your actual Docker username).
- Use the docker tag command to rename your getting-started image so it matches what Docker Hub expects. Replace YOUR-USER-NAME with your actual Docker ID:
docker tag getting-started YOUR-USER-NAME/getting-started
- Now run the docker push command again. If you're copying the value from Docker Hub, you can drop the tagname part, as you didn't add a tag to the image name. If you don't specify a tag, Docker uses a tag called latest.
docker push YOUR-USER-NAME/getting-started
Run the image on a new instance
Now that your image is built and uploaded to a registry, you can test it on a fresh system that hasn’t used this image before.
To do that, use Play with Docker, a free online tool that lets you run Docker containers in a clean environment.
NOTE
Play with Docker runs on the amd64 platform. If you're using an ARM-based Mac with Apple silicon, your image may not work there unless it's rebuilt for amd64.
To fix this, rebuild your Docker image using the --platform flag like this: docker build --platform linux/amd64 -t YOUR-USER-NAME/getting-started .
- Open your browser to Play with Docker via https://labs.play-with-docker.com/
- Click Login, then choose docker from the drop-down list to sign in with your Docker Hub account.
- Sign in using your Docker Hub account, then click Start to launch your session on Play with Docker.
- Select the ADD NEW INSTANCE option on the left side bar. If you don't see it, make your browser a little wider. After a few seconds, a terminal window opens in your browser.
- In the terminal, run the following command to start your newly pushed app:
docker run -dp 0.0.0.0:3000:3000 YOUR-USER-NAME/getting-started
You should see Docker download (or "pull") the image from Docker Hub, and then start running it. After a few moments, your app should be up and ready to use.
Tip:
You might have noticed that this time, the port is bound to 0.0.0.0 instead of 127.0.0.1 like before.
Here's the difference:
- 127.0.0.1: Only allows access from the same machine (localhost). The app is not reachable from outside.
- 0.0.0.0: Makes the app available on all network interfaces, so it can be accessed from other machines too.
By using 0.0.0.0, you're allowing the container's port to be open to the outside world which is why it works in environments like Play with Docker.
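As a quick side-by-side, here are the two bindings using the same image from this guide:

```
# Reachable only from the machine the container runs on
docker run -dp 127.0.0.1:3000:3000 YOUR-USER-NAME/getting-started

# Reachable from other machines on the network as well
docker run -dp 0.0.0.0:3000:3000 YOUR-USER-NAME/getting-started
```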
Summary
In this section, you learned how to share your Docker images by pushing them to a registry like Docker Hub. Then, you tested it by running the image on a completely new instance just like in real-world scenarios.
This is a common setup in CI (Continuous Integration) pipelines:
- The pipeline builds and pushes a new image to a registry.
- Then, a production server or another environment pulls that image and runs it by always using the latest version.
PART 4
PERSIST THE DB
If you noticed that your todo list is empty every time you start a new container, that's expected. But why?
This happens because containers don’t keep data by default. When a container stops or is deleted, any data it stored is lost too. Each time you restart it, it's like a fresh install with no memory of the past.
In the next part, you’ll explore how containers handle data and how to persist it across restarts using things like volumes.
The container's filesystem
When a container runs, it builds its filesystem from the layers in the image it was created from. On top of that, Docker gives each container its own "scratch space" — a writable layer where it can create, modify, or delete files.
Here’s the key point:
This scratch space is private to that container. So:
- Any changes made (like adding a todo item) stay inside that container.
- Other containers using the same image won’t see those changes, because they get their own fresh scratch space.
- Once the container is deleted, its scratch space and all your data are gone too.
To share or persist data, you need a solution outside of the container’s temporary filesystem such as Docker volumes.
Let's try it out
Let’s walk through an example to see how this actually works.
To try this out, you'll run two separate containers. In the first one, you’ll create a file, and in the second one, you’ll check to see if that file is there.
- Start an Alpine container and create a new file in it.
docker run --rm alpine touch greeting.txt
Tip:
Any command you add after the image name (like alpine) runs inside the container. For example, the command touch greeting.txt creates a file named greeting.txt in the container’s filesystem.
- Run a new Alpine container and use the stat command to see if the file is there:
docker run --rm alpine stat greeting.txt
You should see an error like the following, showing that the file isn’t there in the new container:
stat: can't stat 'greeting.txt': No such file or directory
The greeting.txt file made in the first container wasn’t found in the second one. That’s because each container has its own separate writable layer. While both containers use the same base image layers, any changes (like new files) are stored in a layer that’s unique to each container and can’t be seen by others.
Container volumes
Earlier, you saw that every time you run a container, it starts fresh based on the image. Any files the container creates or changes are lost when the container is deleted, and those changes are kept separate from everything else.
Volumes help fix this.
They let you connect a folder inside the container to your own computer. So if the container changes something in that folder, it also shows up on your machine — and it won’t be lost when the container stops or restarts.
There are two main kinds of volumes. You’ll learn about both, but you’ll begin with something called a volume mount.
Persist the todo data
By default, the todo app saves its data in a file called todo.db, which is stored at /etc/todos/todo.db inside the container. This file uses something called SQLite, which is just a lightweight way to store data in one file. It’s great for small projects like this, even if it’s not ideal for big apps. Later on, you’ll learn how to use a more advanced database.
Because all the data is in a single file, you can keep your data safe by saving that file outside the container — on your computer. That way, if the container stops or is deleted, the data stays.
To do this, you will use something called a volume, which is like a storage box managed by Docker. You “attach” (or mount) it to the folder where the app saves its data. Then, even if the container goes away, the data in todo.db is saved and can be used again.
You’re going to use a volume mount, which you can think of as a simple storage bucket. Docker takes care of where it’s stored you just need to know its name.
Create a volume and start the container
You can set up the volume and run the container either using the command line or Docker Desktop's visual interface. In this guide, we’ll use Docker Desktop’s graphical interface to keep things simple and easy to follow.
To create a volume
- Select Volumes in Docker Desktop
- In Volumes, select Create.
- Specify todo-db as the volume name, and then select Create.
To stop and remove the app container:
Select Containers in Docker Desktop.
Select Delete in the Actions column for the container.
To start the todo app container with the volume mounted:
- Click on the search bar at the top of the Docker Desktop window.
- In the search window, select the Images tab.
- In the search box, specify the image name, getting-started
- Select your image and then select Run.
- Select Optional settings.
- In Host port, specify the port, for example, 3000.
- In Host path, specify the name of the volume, todo-db.
- In Container path, specify /etc/todos.
- Select Run.
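If you'd rather use the terminal than Docker Desktop, the same setup can be done with two commands, mirroring the steps above:

```
docker volume create todo-db
docker run -dp 127.0.0.1:3000:3000 --mount type=volume,src=todo-db,target=/etc/todos getting-started
```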
Check to make sure the data is still there.
- Shut down and delete the todo app container. You can do this using Docker Desktop or by running docker ps to find the container ID, then removing it with:
docker rm -f <id>
- Launch a new container by repeating the steps you used earlier.
- Once you’re done reviewing your list, feel free to delete the container.
You’ve just learned how to save data so it’s not lost when the container stops.
Explore what's inside the volume
Many people often wonder, “Where does Docker keep my data when I use a volume?” If you’re curious, you can find out by running the docker volume inspect command:
docker volume inspect todo-db
You should see output like the following:
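The timestamp and path will differ on your machine, but the shape of the output looks like this:

```
[
    {
        "CreatedAt": "2025-01-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
        "Name": "todo-db",
        "Options": {},
        "Scope": "local"
    }
]
```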
The Mountpoint shows the exact place on your computer where the data is stored. Keep in mind that on most systems, you'll need admin or root access to open this folder from your machine.
Summary
In this part, you discovered how to save container data so it remains available even after the container is stopped or removed.
PART 5
USE BIND MOUNTS
In Part 4, you used something called a volume mount to save your app’s data, so it doesn’t get lost when the container stops. Volume mounts are great when you need a safe place to store your app’s files.
Now you’ll learn about another type called a bind mount. This lets you share a folder from your computer directly with the container. So, if you're working on your app’s code and save a change, the container sees that change right away.
This is super helpful while developing, because tools like nodemon (which we will use in this chapter) can watch your files and automatically restart the app whenever you make edits. Most programming languages have tools like this, and it makes testing and building apps much easier.
Quick volume type comparisons
The following are examples of a named volume and a bind mount using --mount:
- Named volume: type=volume,src=my-volume,target=/usr/local/data
- Bind mount: type=bind,src=/path/to/data,target=/usr/local/data
The following table outlines the main differences between volume mounts and bind mounts.
| Feature | Volume Mount | Bind Mount |
|---|---|---|
| Managed by | Docker | You (host OS) |
| Location | Inside Docker's storage | Anywhere on your file system |
| Use case | Production data, persistent storage | Development, local file access |
| Portability | High (works across systems) | Low (host-specific paths) |
| Docker backup tools | Supported | Not supported |
| Performance | Generally better | Can be slower (especially on macOS) |
If you're developing an app, use bind mounts.
If you're running production containers or need reliable data storage, use volumes.
Trying out bind mounts
Before we dive into using bind mounts for app development, let’s do a quick hands-on test. This simple experiment will help you see how bind mounts actually work in a real situation.
- Make sure your getting-started-app folder is located in a part of your computer that Docker Desktop is allowed to access. Docker has a setting that controls which folders on your machine can be shared with containers. If your folder isn’t in one of those approved locations, Docker won’t be able to use it.
To find or change this setting, check Docker Desktop’s file sharing options.
The File Sharing tab only shows up when Docker is running in Hyper-V mode. That’s because in WSL 2 mode and Windows container mode, file sharing is handled automatically, so you don’t need to set it up manually.
- Open a terminal and change directory to the getting-started-app directory.
- Run this command to start a terminal session inside an Ubuntu container, while also linking a folder from your computer (on Mac/Linux):
docker run -it --mount type=bind,src="$(pwd)",target=/src ubuntu bash
The --mount type=bind option tells Docker to link a folder from your computer (in this case, your current working directory, like getting-started-app) into the container. The src is the folder on your computer, and the target is the location inside the container where that folder will show up (here, /src).
- Once you run the command, Docker opens a bash terminal, starting in the main (root) folder of the container’s file system. Run the command pwd to display the full path of the current working directory (the directory you're currently in), then run ls to display the contents of that directory (files and subdirectories).
- Change directory to the src directory. This is the folder you linked (mounted) when you started the container. If you list the files here, you will see the exact same files that are in your getting-started-app folder on your computer.
cd src
ls
- Create a new file named myfile.txt by running:
touch myfile.txt
Then run ls again and you will see the new file listed in the folder.
- Open the getting-started-app folder on your computer and you will see that the file myfile.txt is inside it.
- On your computer, go to the getting-started-app folder and delete the myfile.txt file.
- In the container, list the contents of the app directory once more. Observe that the file is now gone.
- Exit the interactive container session by pressing Ctrl + D on your keyboard.
That’s a quick overview of bind mounts! This showed how files can be shared between your computer and the container, and how any changes you make appear right away in both places. Now you’re ready to use bind mounts to help build and test your software.
Development containers
Using bind mounts is a popular way to develop locally because you don’t have to install all the build tools and software on your own computer. Instead, with just one docker run command, Docker downloads everything your app needs, making it easier and faster to get started.
Run your app in a development container
Here’s how to run a development container with a bind mount that will:
- Mount your source code into the container
- Install all dependencies
- Start nodemon to watch for filesystem changes
Using Mac/Linux
- From inside the getting-started-app folder on your computer, run this command:
docker run -dp 127.0.0.1:3000:3000 \
    -w /app --mount type=bind,src="$(pwd)",target=/app \
    node:18-alpine \
    sh -c "yarn install && yarn run dev"
Here’s what each part of the command means:
- -d -p 127.0.0.1:3000:3000 — Run the container in the background (detached mode) and connect port 3000 on your computer to port 3000 inside the container, but only accessible from your machine (localhost).
- -w /app — Set the working directory inside the container to /app, so commands run from there.
- --mount type=bind,src="$(pwd)",target=/app — Link (bind mount) your current folder on your computer ($(pwd)) into the container’s /app folder.
- node:18-alpine — Use this lightweight Node.js image as the base for your container. This matches what your app uses in its Dockerfile.
- sh -c "yarn install && yarn run dev" — Run a shell command: first install the app’s dependencies with yarn install, then start the development server with yarn run dev. The dev script uses nodemon, which watches for file changes and restarts the app automatically.
Basically, this command sets up your app inside the container, installs what it needs, and runs it so you can develop with live updates.
- You can check what’s happening by running docker logs -f <container-id>. When you see the message below, it means your app is up and running and ready to use:
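The tail of the logs should look roughly like this (nodemon's startup banner is omitted here; details vary):

```
[nodemon] starting `node src/index.js`
Using sqlite database at /etc/todos/todo.db
Listening on port 3000
```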
When you’re finished checking the logs, press Ctrl + C to stop and exit the log view.
Develop your app with the development container
Make changes to your app files on your computer, and you will see those updates happen instantly inside the running container.
- In the src/static/js/app.js file, on line 109, change the "Add Item" button to simply say "Add":

```diff
- {submitting ? 'Adding...' : 'Add Item'}
+ {submitting ? 'Adding...' : 'Add'}
```

Save the file.
- Refresh your web browser, and you should see the update show up almost right away thanks to the bind mount. Nodemon notices the change and restarts the server automatically. It might take a few seconds for the server to restart, so if you see an error, just wait a moment and refresh again.
Go ahead and make any other changes you want. Every time you save a file, those updates will instantly show up inside the container thanks to the bind mount. Nodemon will spot the changes and automatically restart the app for you. When you’re finished, stop the container and create a new image by running:
docker build -t getting-started .
Summary
Now you can keep your database data saved and watch your app update live while you develop without having to rebuild the image every time.
Besides volume mounts and bind mounts, Docker also offers other ways to connect storage and manage data for more advanced needs.
PART 6
MULTI-CONTAINER APPS
So far, we have been working with apps that run inside a single container. Now, we are going to add MySQL to your setup. A common question is: “Where should MySQL run? Should it be installed inside the same container, or run separately?”
Usually, it’s best to have each container handle one job and do it well. Here are a few reasons why running MySQL in its own separate container is a good idea:
- There's a good chance you'll need to scale the APIs and front end separately from your database.
- Running MySQL in its own container lets you update or change its version without affecting the rest of your app.
- When working on your computer, you might run the database in a container, but in real production, you might use a managed database service instead — so you don’t want to include the database inside your app container.
- Containers are designed to run one main process. Running multiple processes (like your app and database together) means you’d need extra tools to manage them, which makes things more complicated when starting or stopping the container.
Container networking
By default, containers run in their own isolated environments and don’t automatically know about other containers on the same machine.
So, how can one container (like your app) talk to another (like a MySQL database)?
The answer is: networking.
When you put both containers on the same Docker network, they can find and talk to each other just like they're on the same private network.
Start MySQL
You can connect containers to a network in two ways:
1. Assign the network when you start the container – This is the most common method. You specify the network as part of the docker run command.
2. Connect an already running container to a network – If the container is already running, you can use docker network connect to link it to a network afterward (see the example below).
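For example, attaching an already running container to the network you're about to create would look like this (the container name is just an illustration):

```
docker network connect todo-app my-running-container
```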
In the next steps, we will first create a Docker network. Then, when we start the MySQL container, we will attach it to that network right from the beginning so it can communicate with other containers.
- Create the network
Run the following command to create a new Docker network named todo-app:
docker network create todo-app
- Start a MySQL container and connect it to the same network as your app. You’ll also set a few basic settings (like password and database name) so MySQL knows how to set itself up when it starts.
Run this command on MAC/LINUX/GITBASH
docker run -d \
--network todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:8.0
In the command you just saw, there’s a part called --network-alias. Don’t worry about it for now — you’ll learn what it means and how it works a bit later.
Tip: In the command above, you might see a volume called todo-mysql-data mounted at /var/lib/mysql, which is where MySQL saves its data. You didn’t manually create this volume with a command; Docker saw that you needed it and automatically made it for you.
- To make sure your database is working, try connecting to it and check if the connection is successful.
docker exec -it <mysql-container-id> mysql -u root -p
When asked for the password, type secret. Once inside the MySQL command line, list the databases to make sure you see the one called todos.
mysql> SHOW DATABASES;
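You should see the todos database in the list, with output along these lines:

```
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| todos              |
+--------------------+
```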
- Leave the MySQL command line by typing exit so you go back to your regular terminal on your computer.
You now have a todos database set up and ready to be used in your app.
Connect to MySQL
Now that MySQL is running, you might wonder how to connect to it from another container. Since each container has its own IP address, how do they find each other?
To help answer this and learn more about how containers talk to each other, you’ll use a special container called nicolaka/netshoot. It comes with handy tools to help you check and fix network problems between containers.
- Start a new container using the nicolaka/netshoot image and connect it to the same network as your other containers so it can communicate with them.
Run this command:
docker run -it --network todo-app nicolaka/netshoot
- Inside the container, run the dig command, which helps look up addresses. Use it to find the IP address of the container named mysql:
dig mysql
You should get output like the following.
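An abridged example of the relevant part; the exact IP address will differ on your machine:

```
;; ANSWER SECTION:
mysql.			600	IN	A	172.20.0.2
```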
In the “ANSWER SECTION,” you’ll see that the name mysql points to an IP address (like 172.20.0.2, though yours will probably be different). Normally, “mysql” isn’t a real website or address, but Docker knows how to match that name to the right container because of the network alias you set earlier.
This means your app can just use the name mysql to connect to the database, it doesn’t need to know the actual IP address.
Run your app with MySQL
The todo app lets you set a few simple settings to connect to MySQL by using environment variables. These are:
- MYSQL_HOST: the name of the MySQL server to connect to
- MYSQL_USER: the username for logging in
- MYSQL_PASSWORD: the password for that user
- MYSQL_DB: the specific database to use after connecting
Note:
Using environment variables to set connection details is okay for testing and development, but it’s not recommended for apps running in real, live environments (production).
A more secure way is to use secrets, which many container tools support. These secrets are usually stored as files inside the container instead of just environment variables.
Many apps, like MySQL and the todo app, let you use special environment variables ending with _FILE. For example, if you set MYSQL_PASSWORD_FILE, the app will read the password from a file instead of from a plain environment variable.
Docker itself doesn’t manage this automatically — your app has to be programmed to check for these _FILE variables and read the secret from the file.
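As a rough sketch of the pattern (the host file name here is made up for illustration), the official MySQL image supports _FILE variables out of the box:

```
docker run -d \
  --network todo-app --network-alias mysql \
  -v todo-mysql-data:/var/lib/mysql \
  -v "$(pwd)/db-password.txt:/run/secrets/db-password" \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password \
  -e MYSQL_DATABASE=todos \
  mysql:8.0
```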
We’re now ready to start a container that’s set up for development.
- Set up your container by providing all the environment variables from before (like MYSQL_HOST, MYSQL_USER, etc.) and connect it to your app’s network. Before running the command, make sure you’re inside the getting-started-app folder on your computer.
Using Mac/Linux
docker run -dp 127.0.0.1:3000:3000 \
-w /app -v "$(pwd):/app" \
--network todo-app \
-e MYSQL_HOST=mysql \
-e MYSQL_USER=root \
-e MYSQL_PASSWORD=secret \
-e MYSQL_DB=todos \
node:18-alpine \
sh -c "yarn install && yarn run dev"
- If you check the container’s logs using docker logs -f <container-id>, you should see a message showing that the app is connected to and using the MySQL database, similar to:
Connected to mysql db at host mysql
Listening on port 3000
- Open the app in your web browser and add some tasks to your todo list.
- Connect to the MySQL database and check to make sure the items you added are saved there. Don’t forget, the password to log in is secret.
docker exec -it <mysql-container-id> mysql -p todos
Once you’re in the MySQL command line, run this command:
select * from todo_items;
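An illustrative result; your rows will reflect the items you added:

```
+--------------------------------------+--------------------+-----------+
| id                                   | name               | completed |
+--------------------------------------+--------------------+-----------+
| c906ff08-60e6-44e6-8f49-ed56a0853e85 | Do amazing things! |         0 |
+--------------------------------------+--------------------+-----------+
```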
Confirm that the items you added earlier appear in the output. Your table will look different because it contains your own todo items, but either way you'll see that your tasks are saved in the database.
Summary
Now you have an app that saves its data in a separate database running in another container. You also learned how containers connect with each other over a network using DNS to find services.
PART 7
USE DOCKER COMPOSE
Docker Compose is a tool that makes it easy to set up and manage multiple containers for your app. You write a simple file (called a YAML file) that lists all the parts of your app, and then with just one command, you can start everything or stop it all.
The best part is that this file lives in your project folder and gets saved with your code. This means anyone who wants to work on your project can just download it and start the app quickly using Compose. Lots of projects on places like GitHub or GitLab use this method now.
CREATE THE COMPOSE FILE
Inside the getting-started-app folder, create a new file and name it compose.yaml.
![compose yaml](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/igxbgz07ua2vmqhzbyq0.png)
DEFINE THE APP SERVICE
In part 6, you used the following command to start the application service:
docker run -dp 127.0.0.1:3000:3000 \
-w /app -v "$(pwd):/app" \
--network todo-app \
-e MYSQL_HOST=mysql \
-e MYSQL_USER=root \
-e MYSQL_PASSWORD=secret \
-e MYSQL_DB=todos \
node:18-alpine \
sh -c "yarn install && yarn run dev"
Now we are going to write the same setup inside the compose.yaml file so Docker can use it automatically.
- Open compose.yaml in a text or code editor, and start by defining the name and image of the first service (or container) you want to run as part of your application. The name will automatically become a network alias, which will be useful when defining your MySQL service.
```yaml
services:
  app:
    image: node:18-alpine
```
- Usually, you'll find the command line placed near the image line in the compose.yaml file, but it doesn't have to be in a specific order. Now, go ahead and add the command section to your compose.yaml file. This tells Docker what the container should do when it starts.
```yaml
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
```
- Now take the part of the command that says -p 127.0.0.1:3000:3000 (which maps your computer’s port to the container’s port) and add it under the ports section of the service in your compose.yaml file. This lets you open the app in your browser at localhost:3000 just like before.
```yaml
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
```
This tells Docker to make the app available on your computer at port 3000.
- Now move the part of the command that sets the working directory (-w /app) and the part that connects your folder to the container (-v "$(pwd):/app"). In the compose.yaml file, this is done using working_dir and volumes.
You don’t need to type the full path — Docker Compose lets you use relative paths (like .:/app, where . means “this folder”).
Here’s how that looks in simple terms:
```yaml
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
```
- Lastly, move the environment variables (the ones you previously set with -e) into your compose.yaml file by using the environment section.
In simple terms, environment variables are like settings your app needs to connect to things like the database.
Here’s how you add them:
```yaml
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos
```
This setup tells the app how to connect to the database — using the host name mysql, a username, a password, and the database name. Now everything your app needs is inside the compose.yaml file.
DEFINE THE MySQL SERVICE
Now let’s set up the MySQL part of the app inside the compose.yaml file.
Earlier, we started MySQL using a command like this below in the terminal:
docker run -d \
--network todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:8.0
- Start by creating a new service in your compose.yaml file called mysql. This name will automatically act like a nickname (network alias) so other parts of your app can find it easily. Also, tell Docker which image to use for this service (in this case, the MySQL database image).
```yaml
services:
  app:
    # The app service definition
  mysql:
    image: mysql:8.0
```
- Next, set up the volume for the MySQL service. When you used docker run, Docker made the volume for you automatically, but with Compose, you have to define it yourself. To do this, list the volume in a top-level volumes: section of your compose.yaml file. Then, in the MySQL service settings, tell it to use that volume by its name. Just giving the volume name is enough; Docker will use the usual settings.
```yaml
services:
  app:
    # The app service definition
  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql

volumes:
  todo-mysql-data:
```
- Lastly, you need to add the environment variables. These are simple settings that tell the MySQL service important details like the username, password, and database name it should use.
```yaml
services:
  app:
    # The app service definition
  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
```
- Now, your full compose.yaml file should look something like this:
```yaml
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
```
Run the application stack
With our compose.yaml file ready, we can now start the app easily.
- Before starting, double-check that no other copies of the containers are running. You can use docker ps to see running containers, and if you find any, use docker rm -f <container-ids> to stop and remove them.
- Run the command docker compose up -d to start all parts of your app at once. The -d flag makes everything run in the background so you can keep using your terminal.
You’ll see that Docker Compose made a special storage space (called a volume) and a network for your app. It does this automatically to help the different parts of your app talk to each other, so you didn’t have to set it up yourself in the file.
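The startup output reflects this. The names are derived from your project folder, so they may differ slightly, but it should resemble:

```
[+] Running 4/4
 ✔ Network getting-started-app_default           Created
 ✔ Volume "getting-started-app_todo-mysql-data"  Created
 ✔ Container getting-started-app-app-1           Started
 ✔ Container getting-started-app-mysql-1         Started
```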
- Check the logs by running docker compose logs -f. This shows messages from all the parts of your app mixed together in one place. It's really handy to see how things are working at the same time. The -f flag means the logs keep updating live as new messages come in. When you run it, you'll see the output from every service woven together.
At the start of each line in the logs, you’ll see the name of the service that created that message (it’s usually in color to make it easier to tell them apart).
If you only want to see messages from one specific part of your app, just add the service name to the end of the command. For example:
docker compose logs -f app
This will only show logs from the app service.
- Now, you can open your web browser and go to http://localhost:3000. Your app should be up and running there.
See the app stack in Docker Desktop Dashboard
If you open Docker Desktop, you’ll see a section called getting-started-app. That’s just the name of your project, and it comes from the name of the folder where your compose.yaml file is saved.
When you click to expand it, you’ll see the two containers you set up — one for your app and one for MySQL. Their names are easy to understand because they follow a pattern like [service-name]-[number], for example app-1 or mysql-1. This helps you quickly know which one is which.
TEAR IT ALL DOWN
When you're done and want to shut everything down, just run the command docker compose down, or click the trash can icon in Docker Desktop next to your app. This will stop the containers and remove the network they were using.
Heads up: When you use docker compose down, it doesn't automatically delete the named volumes listed in your Compose file. These volumes hold your data, so they stick around unless you specifically tell Docker to remove them by adding the --volumes option to the command.
Also, if you're using Docker Desktop and delete your app from the dashboard, it won’t remove the volumes either. You’ll need to delete them manually if you want them gone.
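For example, to tear everything down and delete the named volumes in one go:

```
docker compose down --volumes
```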
Summary
In this part, you learned how Docker Compose makes it easier to manage apps that use more than one container. Instead of running lots of commands, you can describe everything your app needs in one file, and then start or stop everything with just one simple command. This also makes it easier to share your app with others.
PART 8
IMAGE-BUILDING BEST PRACTICES
Image layering
With the docker image history command, you can look at the steps that were taken to build a Docker image—kind of like checking the recipe that was followed. Each line shows one layer that was added, along with the command that created it.
- Run the following command in your terminal to see the different layers that make up your getting-started image: docker image history getting-started
This will show you a list of steps (layers) that were used to build the image, including things like which base image was used, any files copied, packages installed, and other commands run during the build. It's like looking at the construction steps of the image, from bottom to top.
You should get output that looks something like the following.
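Abridged and illustrative only; the hashes, timestamps, and sizes on your machine will differ:

```
IMAGE          CREATED          CREATED BY                                      SIZE
a78a40cbf866   18 seconds ago   CMD ["node" "src/index.js"]                     0B
f1d1808565d6   19 seconds ago   RUN yarn install --production                   85.4MB
a2c054d14948   36 seconds ago   COPY . . # buildkit                             4.59MB
9577ae713121   37 seconds ago   WORKDIR /app                                    0B
...
```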
Each line you see is like a step in a recipe. The line at the bottom shows the base image (the starting point), and each line above it shows something that was added later. This view helps you understand how the image was built and lets you spot which parts take up the most space—so if your image is too big, you can figure out why.
- You might see that some of the lines are cut short (truncated). If you want to see the whole thing without anything being cut off, you can add --no-trunc to the command. That way, all the details will be shown in full.
docker image history --no-trunc getting-started
Layer caching
Now that you’ve seen how the image layers work, here’s an important tip to make your builds faster: if one layer changes, everything built after it has to be rebuilt too.
Check out this Dockerfile you made for the getting started app to see how this happens.
Let’s go back to what the image history showed: every line in your Dockerfile becomes a separate layer in the image. You might’ve noticed that when you made a small change, like updating your app, Docker reinstalled all your packages again—even though they didn’t change. That’s a waste of time and effort.
Here’s how to fix it: rearrange your Dockerfile so Docker installs your dependencies before it copies the rest of your app code. For Node.js apps, this means:
a. First, copy in just the package.json file.
b. Then install the dependencies.
c. Finally, copy in the rest of the app.
By doing this, Docker will only reinstall packages if package.json changes, saving time during rebuilds.
- Change the Dockerfile so that it first brings in the file that lists your app’s tools (package.json), installs those tools, and then adds the rest of your project files. This way, Docker won’t reinstall everything every time you make a small change to your code; it’ll only do that if the list of tools changes.
# syntax=docker/dockerfile:1
FROM node:lts-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --production
COPY . .
CMD ["node", "src/index.js"]
- Create a fresh version of your app’s image by running the docker build command. This tells Docker to put everything together based on your updated instructions.
docker build -t getting-started .
You should see output like the following.
- Go to the src/static/index.html file and change the title of the page. For example, change it from whatever it is now to say: "The Awesome Todo App". This is just a small update to see your changes in action.
- Now run the command to build your Docker image again:
docker build -t getting-started .
This time, the build process should be faster for some steps. That’s because Docker notices that some things didn’t change (like the dependencies), so it reuses them instead of redoing everything from scratch.
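On this second build you should see Docker reuse the cached dependency layers; the output will look roughly like this (step numbers and timings are illustrative):

```
[+] Building 1.2s (10/10) FINISHED
 => CACHED [2/5] WORKDIR /app
 => CACHED [3/5] COPY package.json yarn.lock ./
 => CACHED [4/5] RUN yarn install --production
 => [5/5] COPY . .
```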
Multi-stage builds
Multi-stage builds are a smart way to break your image creation into steps. Here's what that means in simple terms:
- You use one part to build the app, where you might need tools and libraries just to help create the final version.
- Then, you create a second, much smaller part, where you only keep what the app needs to run, not all the extra tools used during building.
This approach helps you:
- Keep things clean by not mixing build tools with your actual app.
- Make your final image smaller, faster, and easier to share or deploy.
Maven/Tomcat example
When you're building Java apps, you need some heavy tools like the JDK (Java Development Kit) and build tools like Maven or Gradle to put your code together. But once your app is built, you don’t need those tools anymore to actually run the app.
So instead of putting everything into one image (which makes it big and messy), multi-stage builds let you use those tools only while building, and then leave them out of the final product. This way, your final app image is lighter, faster, and cleaner — it only contains what’s truly needed to run the app.
# syntax=docker/dockerfile:1
FROM maven AS build
WORKDIR /app
COPY . .
RUN mvn package
FROM tomcat
COPY --from=build /app/target/file.war /usr/local/tomcat/webapps
In simple terms: this setup has two steps.
- The first step (called build) uses Maven to put your Java app together.
- The second step (starting with FROM tomcat) is where you take the finished app from the first step and place it into a clean, ready-to-run environment.
Only the second step ends up in the final Docker image, so the extra tools used during the build (like Maven) aren’t included in the final product. This keeps the image small and tidy.
If needed, you can also choose to stop after a certain step using the --target option.
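For example, to build only the first stage of the Maven example above (the stage name build comes from the AS build line; the image tag is just an illustration):

```
docker build --target build -t app-build .
```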
React example
When you build a React app, you need Node.js to turn your code (like JSX and SASS) into regular files like HTML, JavaScript, and CSS that browsers can understand. This is called compiling.
But once that’s done, for running the app in production, you don’t need Node.js anymore. Instead, you can just serve those ready-made files using a simple web server like nginx inside a lightweight container. This keeps things faster and simpler.
# syntax=docker/dockerfile:1
FROM node:lts AS build
WORKDIR /app
COPY package* yarn.lock ./
RUN yarn install
COPY public ./public
COPY src ./src
RUN yarn run build

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
In the earlier example, the Dockerfile first uses a node image to build the app (which helps speed things up by reusing parts it already made), then it takes the finished files and puts them into a simple nginx web server container to run the app.
Summary
In this part, you learned some good tips for building images, like saving work with layer caching and using multi-step builds to make smaller, cleaner images.