How to run a multi-container app with Docker Compose

What you'll learn

By the end of this section, you'll:

  1. Understand what Docker Compose is and why it’s useful for managing multi-container applications
  2. Learn how a docker-compose.yml file defines and runs an entire application
  3. Run multiple services together as a single system instead of starting containers manually
  4. Configure environment variables inside Docker Compose
  5. Enable communication between services using Docker’s built-in networking
  6. Persist database data using volumes
  7. Deploy a Node.js API with MongoDB using one command
  8. Start, stop, and manage your whole application using Docker Compose commands

Why managing containers manually doesn’t scale

Before Docker Compose, the usual way to run a multi-container setup was to start each container separately, then manually connect everything together.

If you’ve been following along in this series, you already know the building blocks:

  • Dockerfiles package your application into an image
  • Networking allows containers to communicate
  • Volumes preserve your data between restarts

Individually, these are simple.

But once your project has more than one container, the setup quickly turns into a long list of commands you need to remember and repeat every time you run the app.

Take a look at what the manual approach can feel like:

docker network create app-net
docker run -d --name mongo --network app-net mongo
docker run -d --name node-api --network app-net -p 3000:3000 -e MONGO_URL=... node-api

This works, but there are some clear problems:

  • too many commands to manage

  • easy to forget flags or the correct order

  • harder for teammates to follow

  • not easily reproducible across machines

At some point, it stops feeling like “running an application” and starts feeling like manually configuring containers to work together.

There has to be a simpler way to treat these containers as one application.

What is Docker Compose?

Docker Compose is a tool that lets you define and run multiple containers as a single application.

Instead of starting containers individually and remembering a long list of commands, you describe everything in one place, then start the entire system together.

In practice, that means:

  • you define your services in one file
  • Docker sets up the network for you
  • Docker creates volumes for you
  • and you start everything with a single command

Here’s the mental model I want you to keep in mind:

  • a Dockerfile describes one container
  • a docker-compose.yml file describes your whole application

So rather than managing containers separately, you treat them as one system.

You’ll see terms like services, networks, and volumes as we go. Don’t worry about them yet. We’ll learn them naturally in the hands-on sections.

For now, just remember: Docker Compose helps you run multiple containers together like one application.

Understanding the docker-compose.yml structure

When most people see a YAML file for the first time, it can feel a little intimidating with all the nested keys and indentation.

But it looks more complicated than it really is.

So instead of pasting a huge file and trying to explain everything at once, let’s build it gradually.

We’ll start small and add each part step by step.

Let’s start with the smallest possible Compose file:

services:
  app:
    image: node

That’s it.

Let’s break this down together:

  • services → the containers we want Docker to run

  • app → the name of our service (you can choose this)

  • image → the image Docker should use to create the container

So with just these few lines, we’ve already told Docker:

“Start one container called app using the Node image.”

From here, we’ll keep extending the same file.

As our application becomes more complex, we’ll add things like:

  • ports to expose the app to our browser

  • environment for configuration values

  • volumes for persistent data

  • depends_on to control startup order

The key idea is simple: we don’t write everything at once.

We add one piece, understand it, test it, then move on.

That way, the file never feels overwhelming, and you always know exactly what each line is doing.
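
To give you a sense of where we're headed, here's roughly what the finished file will look like once all of those pieces are in place (we'll build it up line by line in the hands-on steps below):

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      MONGO_URL: mongodb://mongo:27017
      DB_NAME: compose_demo
    depends_on:
      - mongo

  mongo:
    image: mongo:latest
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data: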

Next, we’ll put all of this into practice with a hands-on project and apply these concepts in an actual setup.

Setting up the demo project

Before we start writing our docker-compose.yml file, we need a small project to run. The goal is not to build a feature-rich API. We just need something simple that can connect to a database so we can focus on learning Docker Compose.

If you prefer to skip the setup, you can clone the complete project here: https://github.com/d-emeni/node-api-compose-demo

git clone https://github.com/d-emeni/node-api-compose-demo.git
cd node-api-compose-demo

Prerequisites

To follow along, you should already have:

  • Docker installed and working on your machine (Follow this guide)
  • a basic understanding of Docker images and containers (read up here if you're new to containers)
  • familiarity with Dockerfiles (you have already seen this earlier in the series)
  • basic Node.js knowledge (enough to run a small API)

If you have gone through the earlier posts in this series, you are in a good place to continue.

What we’re building

We’ll build a small Node.js API that:

  • connects to MongoDB
  • saves data
  • returns it

That gives us a realistic setup for Docker Compose without adding unnecessary complexity.
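
If you're building the project yourself rather than cloning it, here's a minimal sketch of what index.js could look like, assuming Express and the official mongodb driver (the cloned repo may differ in the details):

// index.js — a minimal sketch, assuming Express and the official MongoDB driver
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

// Read configuration from the environment (set later via Docker Compose)
const mongoUrl = process.env.MONGO_URL || 'mongodb://localhost:27017';
const dbName = process.env.DB_NAME || 'compose_demo';

let db;

// Simple health check used to verify the container is up
app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

// Create a note
app.post('/notes', async (req, res) => {
  const result = await db.collection('notes').insertOne({ text: req.body.text });
  res.json({ message: 'Note created', id: result.insertedId, note: { text: req.body.text } });
});

// List all notes
app.get('/notes', async (req, res) => {
  const notes = await db.collection('notes').find().toArray();
  res.json(notes);
});

// Connect to MongoDB first, then start the HTTP server
async function start() {
  const client = new MongoClient(mongoUrl);
  await client.connect();
  db = client.db(dbName);
  console.log(`Connected to MongoDB at ${mongoUrl}`);
  app.listen(3000, () => console.log('API listening on port 3000'));
}

start().catch((err) => {
  console.error('Failed to start:', err);
  process.exit(1);
});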

Project structure

This is the structure we’ll work with:

node-api/
 index.js
 package.json
 package-lock.json
 Dockerfile
 .dockerignore
 .gitignore
 docker-compose.yml
 README.md

Don’t worry if some of these files look unfamiliar. We’ll walk through the important ones as we go.
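
For reference, the Dockerfile here is a typical Node.js one, along these lines (yours may pin a different base image):

# A standard Node.js Dockerfile
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

EXPOSE 3000
CMD ["node", "index.js"]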

Step 1: Creating our docker-compose.yml file

Now that we have a working Node.js API, the next step is to run the API and MongoDB as a single application using Docker Compose.

This is where Compose starts to feel useful.

Instead of:

  • starting MongoDB manually
  • starting the API manually
  • remembering flags like ports, environment variables, and container names

We’re going to describe the entire setup in one file, then run everything together.

Create the file

In the root of your project (the same level as your Dockerfile), create a file named:

docker-compose.yml

In the next section, we’ll start with the smallest Compose configuration and build it up step by step.

Step 2: Start with the smallest working Compose file

Let’s start small and build up from there.

Add this to your docker-compose.yml:

services:
  app:
    build: .
    ports:
      - "3000:3000"

Let’s break down what this means:

  • services is where we define the containers our application needs

  • app is the name of our first service (this is your Node.js API)

  • build: . tells Docker Compose to build an image using the Dockerfile in the current folder

  • ports: "3000:3000" maps port 3000 on your machine (the left side) to port 3000 inside the container (the right side)

Run it

From the project root, run:

docker compose up --build

If everything is set up correctly, Docker will build the image and start your API container.

You should see Docker building the image, creating the container, and then starting the Node.js server logs.

Don’t worry if you see a MongoDB connection error at the end. That’s expected for now, because we haven’t added MongoDB yet.


Step 3: Add MongoDB as a service

Right now, our Compose file only starts the API container. That’s why the app fails to connect to MongoDB.

The fix is simple:

  1. add a MongoDB service
  2. point our Node.js app to that service using an environment variable

Update your docker-compose.yml

Replace the contents of your docker-compose.yml with this:

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      MONGO_URL: mongodb://mongo:27017
      DB_NAME: compose_demo
    depends_on:
      - mongo

  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"

What changed?

  • mongo is a new service running the official MongoDB image

  • MONGO_URL is now mongodb://mongo:27017 (the service name becomes the hostname)

  • depends_on tells Docker Compose to start MongoDB before starting the API (note: this only controls start order, it doesn't wait for MongoDB to be ready to accept connections; see the healthcheck sketch below if you ever need that)
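
If your app genuinely needs MongoDB to be ready, not just started, Compose supports healthchecks. Here's a sketch, assuming the mongosh shell that ships with recent MongoDB images; it's not required for this demo:

services:
  mongo:
    image: mongo:latest
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 5

  app:
    # ...rest of the app service as before...
    depends_on:
      mongo:
        condition: service_healthy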

Run it again

If your previous Compose run is still active, stop it with:

Ctrl + C

Then start everything again:

docker compose up --build

When you run the command, Docker Compose will go through two phases.

First, it builds and prepares everything for you.

You’ll see Docker pulling the MongoDB image (if you don’t already have it), building your Node.js image from the Dockerfile, and creating the containers.


After the images are built, Docker starts both containers and begins streaming their logs.

You should see MongoDB starting up first, followed by your API connecting to it.

If everything is working, look for a message like:

Connected to MongoDB at mongodb://mongo:27017


Step 4: A quick recap of what just happened

Before we move on, let’s pause for a moment and connect the dots.

With one docker-compose.yml file, we:

  • built our Node.js image
  • started a MongoDB container
  • created a shared network automatically
  • connected both services together
  • and launched the entire application with a single command

Instead of running multiple docker run commands and configuring networking manually, you defined the system once and Docker Compose handled the setup for you.

That’s really the core idea behind Docker Compose.

You describe your application in one place, and Docker takes care of creating and running the containers.

Now that everything is up and running, let’s test the API and confirm our application behaves as expected.

Step 5: Testing the application

At this point, both containers are running:

  • the Node.js API
  • the MongoDB database

They’re connected through Docker Compose, and your app should already be talking to MongoDB in the background.

Now let’s quickly verify that everything works as expected.

Check the health endpoint

Open a new terminal and run:

curl http://localhost:3000/health

You should see:

{ "status": "ok" }

This confirms that:

  • the API is running

  • the server is reachable

  • the container started correctly

Save some data

Now let’s store something in the database.

curl -X POST http://localhost:3000/notes \
  -H "Content-Type: application/json" \
  -d '{"text":"hello from docker compose"}'

You should get a response similar to:

{
  "message": "Note created",
  "id": "...",
  "note": {
    "text": "hello from docker compose"
  }
}

This tells us the API successfully wrote data to MongoDB.

Read the data back

curl http://localhost:3000/notes

You should see your saved note returned in the response.


If you can create and read notes successfully, your containers are:

  • running

  • connected

  • and communicating correctly

At this point, you officially have a multi-container application running with Docker Compose.

In the next section, we’ll make this setup more practical by adding persistent storage so your database data survives container restarts.

Step 6: Adding persistent storage with volumes

Right now, everything works.

You can create notes, read them back, and the API communicates with MongoDB correctly.

But there’s one problem.

If you stop and remove the containers, all your database data disappears.

That’s because containers are ephemeral by default. When a container is deleted, its filesystem is deleted too.

Let’s prove that quickly.

Stop everything

Press:

Ctrl + C

Then remove the containers:

docker compose down

Now start everything again:

docker compose up

If you try:

curl http://localhost:3000/notes

You’ll notice your notes are gone.


The database started fresh.

This is not what we want in our applications.

We need our data to survive container restarts.

What are volumes?

Docker volumes are persistent storage managed by Docker.

Think of them as:

  • storage outside the container

  • that containers can mount and reuse

  • even after they are stopped or deleted

So instead of storing MongoDB data inside the container, we store it in a volume.

If you want more hands-on practice with volumes, we covered them in an earlier post in this series: "Run a MySQL container with persistent storage using Docker volumes"

Update your docker-compose.yml

Add a volume to the MongoDB service:

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      MONGO_URL: mongodb://mongo:27017
      DB_NAME: compose_demo
    depends_on:
      - mongo

  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:

What this does

- mongo_data:/data/db

This means:

  • mongo_data → Docker volume

  • /data/db → where MongoDB stores its files inside the container

So MongoDB now saves data to the volume instead of the container filesystem.

Even if the container is removed, the volume remains.

Test it

Run:

docker compose up --build

Docker will rebuild the images, recreate the containers, and create a new volume for MongoDB. Watch the startup output for a line saying the mongo_data volume was created.

Create a note again so we have some data to test persistence with:

curl -X POST http://localhost:3000/notes \
  -H "Content-Type: application/json" \
  -d '{"text":"hello again after restart"}'

Then stop everything:

docker compose down

Start it again:

docker compose up

Now check:

curl http://localhost:3000/notes

Your data should still be there.


That’s persistence working.
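
If you're curious, you can also inspect the volume Docker created for you. Compose prefixes the volume name with the project (folder) name, so yours may differ slightly from this sketch:

docker volume ls
docker volume inspect node-api-compose-demo_mongo_data

The inspect output shows where the data actually lives in Docker's storage on your machine.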

In the next section, we’ll clean things up further by using environment variables more cleanly with a .env file.

Step 7: Managing configuration with a .env file

So far, everything works.

But our docker-compose.yml is starting to look a little noisy.

Right now we have configuration values hardcoded directly inside the file:

  • database URL
  • database name
  • ports

This works for small demos, but in your actual projects it can quickly become chaotic.

As your app becomes more complex, you don’t want to edit the Compose file every time you change:

  • a port
  • an environment variable
  • or a setting for a different environment (dev, staging, production)

Instead, we separate configuration from infrastructure.

That’s where a .env file helps.

What is a .env file?

A .env file stores environment variables in one place.

Docker Compose automatically reads this file and substitutes the variables referenced inside docker-compose.yml.

So instead of hardcoding values, we reference them.

This keeps your setup:

  • cleaner
  • easier to change
  • and more portable

Create a .env file

In your project root, create:

touch .env

Add:

PORT=3000
MONGO_URL=mongodb://mongo:27017
DB_NAME=compose_demo

Update docker-compose.yml

Now replace the hardcoded values with variables:

services:
  app:
    build: .
    ports:
      - "${PORT}:3000"
    environment:
      MONGO_URL: ${MONGO_URL}
      DB_NAME: ${DB_NAME}
    depends_on:
      - mongo

  mongo:
    image: mongo:latest
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:

What changed?

Instead of:

MONGO_URL: mongodb://mongo:27017

We now use:

MONGO_URL: ${MONGO_URL}

Docker Compose reads the value from .env. You might also notice the mongo service no longer publishes port 27017 to the host; the API reaches MongoDB over Compose's internal network, so that mapping is only needed if you want to connect from your own machine.

So if you ever want to change something, you only edit one file.

No touching the Compose config.
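
A quick way to verify the substitution is docker compose config, which prints your Compose file with every variable resolved:

docker compose config

If a value in .env is missing or misspelled, you'll spot it (or an empty string) right there in the output.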

Run it again

docker compose up --build

This time, startup is much faster: Docker reuses cached image layers instead of rebuilding everything from scratch, recreates the containers, and reattaches the existing volume.

Everything should behave exactly the same.

The difference is that your configuration is now cleaner and easier to manage, and you can change values without editing the docker-compose.yml file.

This becomes more valuable as your application becomes more complex or when you deploy to different environments.

In the next section, we’ll look at a few everyday Docker Compose commands that make managing multi-container apps much easier.

Everyday Docker Compose commands

Now that everything is running, let’s look at a few commands you’ll use regularly when working with Docker Compose.

These make it much easier to start, stop, rebuild, and debug your application during development.

Start all services

docker compose up

Builds (if needed) and starts every service defined in docker-compose.yml.

Start in the background (detached mode)

docker compose up -d

Runs containers in the background so your terminal stays free.

This is how you’ll usually run your app during development.

Rebuild after code changes

docker compose up --build

Forces Docker to rebuild images before starting containers.

Useful when you change your Dockerfile or dependencies.

View logs

docker compose logs

See logs from all services.

Follow logs live:

docker compose logs -f

Very helpful for debugging.

Check running containers

docker compose ps

Shows which services are running and their ports.
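
Run a command inside a service

docker compose exec app sh

Opens a shell inside the running app container (use bash instead of sh if your image includes it). Handy for poking around while debugging.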

Stop everything

docker compose down

Stops and removes containers and the network.

Your data stays safe because it’s stored in volumes.

Reset everything (including database data)

docker compose down -v

Removes containers and volumes.

This wipes the database and gives you a completely fresh start.

Use this when testing or troubleshooting.

At this point, you should feel comfortable managing your entire multi-container app with just a few simple commands.

Wrap up

You started with a single container and gradually built up to a complete multi-container setup.

Along the way, you learned how to:

  • run multiple services with Docker Compose

  • connect containers using service names

  • persist data with volumes

  • clean up configuration using environment variables

  • manage everything with simple Compose commands

You now have a small but realistic Node.js + MongoDB application running exactly how many real-world projects run in development.

What’s next in the Docker learning series?

So far, everything has been running locally on your machine.

You now know how to package an app into containers, run multiple services with Docker Compose, connect them together, and manage them in development.

But local environments are only half the story.

In the next part of this series, we’ll move beyond your laptop and deploy containers to the cloud.

You’ll:

  • understand cloud container platforms like AWS ECS, Google Cloud Run, and Azure's container services
  • deploy a containerized app to the cloud
  • manage and run containers in cloud environments
  • hands-on: deploy an API container to Google Cloud Run (and try the same on AWS and Azure)

By the end, you’ll be able to deploy your containers beyond your local machine and into the cloud.
