DEV Community

David Tio

Posted on • Originally published at blog.dtio.app

Docker Compose Explained: Multi-Container Stacks (2026)

Quick one-liner: Connect CloudBeaver and PostgreSQL in one compose file. Then scale up to a full four-service Nextcloud stack with shared networks.


🤔 Why This Matters

In the last post, you created two compose projects: PostgreSQL in one directory, CloudBeaver in another. Each has its own compose file, its own network, its own lifecycle. They can't talk to each other.

That's the problem this post solves. We'll put them in one file, on one network, and CloudBeaver can finally reach PostgreSQL. No custom network commands. No --network flags. Just one docker compose up -d.

Once you've got two services talking, we'll scale up to four: a full Nextcloud stack with MariaDB, Redis, PHP-FPM, and nginx, all in one file.

By the end of this post, you'll have:

  • CloudBeaver + PostgreSQL connected in one compose file
  • A four-service Nextcloud stack on a shared network

✅ Prerequisites

  • Ep 1-7 completed. You know Compose basics like single service per file, .env files, and the up/ps/logs/down workflow.

📦 The Problem: Two Compose Files, Two Networks

Last time you ended up with PostgreSQL in one directory and CloudBeaver in another:

~/
├── dtstack-pg/
│   ├── docker-compose.yml
│   └── .env
└── dtstack-cb/
    ├── docker-compose.yml
    └── ...

Each project gets its own network. PostgreSQL is on dtstack-pg_default, CloudBeaver is on dtstack-cb_default. They can't reach each other. You can't connect CloudBeaver to the database.

That's what multi-service compose fixes. One file, one network, both services talking.
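(As an aside, there is a way to bridge two separate projects without merging them: a pre-created external network. This sketch is hypothetical and not used in this post; the network name shared-net is made up.)

```yaml
# Hypothetical alternative: run `docker network create shared-net` once,
# then point each project's default network at it in BOTH compose files:
networks:
  default:
    name: shared-net
    external: true
```

That works, but two files sharing one network is harder to reason about than a single file, which is the approach this post takes.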


🔧 Step 1: CloudBeaver + PostgreSQL in One File

Create a single directory for your stack:

$ mkdir -p cloudstack && cd cloudstack

Create docker-compose.yml:

services:
  postgres:
    container_name: dtstack-pg
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: ${PG_PASSWORD}
      POSTGRES_DB: ${PG_DATABASE}
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - dtstack

  cloudbeaver:
    container_name: dtstack-cb
    image: dbeaver/cloudbeaver:latest
    ports:
      - "8978:8978"
    volumes:
      - cbdata:/opt/cloudbeaver/workspace
    networks:
      - dtstack

volumes:
  pgdata:
  cbdata:

networks:
  dtstack:
    driver: bridge

Two services. One networks: block. Both on the same dtstack network.

Create .env:

$ cat > .env << EOF
PG_PASSWORD=docker
PG_DATABASE=testdb
EOF

Two things to notice:

  1. No ports on PostgreSQL. CloudBeaver reaches it on the internal network, so there's no need to expose port 5432 to the host. Only CloudBeaver needs a port mapping since it's the one you access from your browser.

  2. Each service lists networks: - dtstack. This explicitly connects them to the shared bridge network. Compose would create a default network and connect them automatically, but declaring it explicitly makes the intent clear.
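For comparison, here is what the same file would look like relying on the implicit default network (a sketch; behavior is equivalent, only the network name changes to cloudstack_default):

```yaml
# Equivalent sketch using Compose's implicit default network.
services:
  postgres:
    image: postgres:17
    # ...same environment and volumes as above, no networks: key

  cloudbeaver:
    image: dbeaver/cloudbeaver:latest
    # ...same ports and volumes as above, no networks: key

# No top-level networks: block; Compose creates <project>_default
# and attaches every service to it automatically.
```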


🚀 Start the Stack

$ docker compose up -d
[+] Running 7/7
 ✔ Image postgres:17                Pulled
 ✔ Image dbeaver/cloudbeaver:latest Pulled
 ✔ Network cloudstack_dtstack       Created
 ✔ Volume cloudstack_pgdata         Created
 ✔ Volume cloudstack_cbdata         Created
 ✔ Container dtstack-cb             Started
 ✔ Container dtstack-pg             Started

One command. Seven things done (two images pulled, network, two volumes, two containers). Everything connected.

Verify both services are running:

$ docker compose ps
NAME         IMAGE                        COMMAND                  SERVICE      CREATED      STATUS      PORTS
dtstack-cb   dbeaver/cloudbeaver:latest   "./launch-product.sh"    cloudbeaver  5 min ago    Up 5 min    0.0.0.0:8978->8978/tcp
dtstack-pg   postgres:17                  "docker-entrypoint.s…"   postgres     5 min ago    Up 5 min    5432/tcp

Only CloudBeaver has a port mapping. PostgreSQL is on the dtstack network but invisible to the host. That's exactly what we want.


🔍 Verify Connectivity

Open http://localhost:8978 in your browser. CloudBeaver loads.

Now add PostgreSQL as a connection in CloudBeaver:

  • Host: postgres (the service name, not an IP address)
  • Port: 5432
  • Database: testdb (from your .env)
  • Username: postgres
  • Password: docker (from your .env)

Connect. It works. CloudBeaver reaches PostgreSQL by service name: no IP addresses, no docker network connect commands.


🔍 Inspect the Network

See what Compose created:

$ docker network inspect cloudstack_dtstack

Look at the Containers section. Both services are listed with their IP addresses:

"Containers": {
    "abc123...": {
        "Name": "dtstack-pg",
        "IPv4Address": "172.19.0.2/16"
    },
    "def456...": {
        "Name": "dtstack-cb",
        "IPv4Address": "172.19.0.3/16"
    }
}

Two containers. One network. When connecting services to each other, use the service name (postgres, cloudbeaver). Container names also resolve on a user-defined network, but service names are the stable choice: they keep working even if you drop or rename container_name.
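If you'd rather not eyeball the JSON, the inspect output is easy to post-process. A small sketch (the JSON here is the trimmed sample from above, not live output; in real use you'd feed in the actual command output):

```python
import json

# Trimmed sample of `docker network inspect cloudstack_dtstack` output.
inspect_json = """
[{"Name": "cloudstack_dtstack",
  "Containers": {
    "abc123": {"Name": "dtstack-pg", "IPv4Address": "172.19.0.2/16"},
    "def456": {"Name": "dtstack-cb", "IPv4Address": "172.19.0.3/16"}}}]
"""

data = json.loads(inspect_json)
containers = {
    c["Name"]: c["IPv4Address"].split("/")[0]  # strip the /16 suffix
    for c in data[0]["Containers"].values()
}
print(containers)
```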


🛑 Tear It Down

$ docker compose down
[+] Running 3/3
 ✔ Container dtstack-cb       Removed
 ✔ Container dtstack-pg       Removed
 ✔ Network cloudstack_dtstack Removed

Two containers gone. Network gone. Volumes survive.

Volumes are preserved by default. Check:

$ docker volume ls | grep cloudstack
local  cloudstack_pgdata
local  cloudstack_cbdata

Start the stack again and your database is still there:

$ docker compose up -d

To remove everything including volumes:

$ docker compose down --volumes
[+] Running 5/5
 ✔ Container dtstack-pg       Removed
 ✔ Container dtstack-cb       Removed
 ✔ Volume cloudstack_cbdata   Removed
 ✔ Volume cloudstack_pgdata   Removed
 ✔ Network cloudstack_dtstack Removed

📦 Step 2: Scale Up to a Four-Service Nextcloud Stack

Now let's go bigger. Four services in one file, all talking to each other.

Create a single directory for your Nextcloud stack:

$ mkdir -p nextcloud && cd nextcloud

Create docker-compose.yml:

services:
  db:
    container_name: nc-db
    image: mariadb:11
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - nextcloud

  redis:
    container_name: nc-redis
    image: redis:8.6
    volumes:
      - redisdata:/data
    networks:
      - nextcloud

  php:
    container_name: nc-php
    image: nextcloud:fpm
    volumes:
      - ./html:/var/www/html
    networks:
      - nextcloud

  nginx:
    container_name: nc-nginx
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/var/www/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      - nextcloud

volumes:
  dbdata:
  redisdata:

networks:
  nextcloud:
    driver: bridge

Four services. One networks: block. All of them on the same nextcloud network.

Create .env:

$ cat > .env << EOF
MYSQL_ROOT_PASSWORD=nextcloud
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=nextcloud
EOF

Create nginx.conf:

server {
    listen 80;
    server_name localhost;

    root /var/www/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Three things to notice:

  1. Volumes are shared. dbdata and redisdata are defined once at the bottom and used by the services that need them.

  2. PHP-FPM and nginx both mount ./html. Both services mount the same host directory at the same container path: /var/www/html. When Nextcloud writes an uploaded file to /var/www/html/data/user1/photo.jpg inside the PHP-FPM container, nginx can immediately serve it from the same path. No copying, no syncing, just one shared directory.

  3. Nginx needs a config to talk to PHP-FPM. The nginx.conf file tells nginx: when you see a .php request, don't serve the raw file. Forward it to the php service on port 9000 via FastCGI. Without this, your browser would download index.php instead of running it.
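If you'd rather not have a ./html directory appear on the host, a named volume can play the same role. A hedged sketch of just the changed keys (the volume name nc_html is made up; both services still see the same /var/www/html):

```yaml
services:
  php:
    image: nextcloud:fpm
    volumes:
      - nc_html:/var/www/html   # same named volume...

  nginx:
    image: nginx:latest
    volumes:
      - nc_html:/var/www/html   # ...mounted at the same path

volumes:
  nc_html:
```

The trade-off: inspecting Nextcloud's files then requires docker compose exec instead of a plain ls html/ on the host.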


🚀 Start the Nextcloud Stack

$ docker compose up -d
[+] Running 11/11
 ✔ Image nextcloud:fpm         Pulled
 ✔ Image nginx:latest          Pulled
 ✔ Image mariadb:11            Pulled
 ✔ Image redis:8.6             Pulled
 ✔ Network nextcloud_nextcloud Created
 ✔ Volume nextcloud_dbdata     Created
 ✔ Volume nextcloud_redisdata  Created
 ✔ Container nc-nginx          Started
 ✔ Container nc-db             Started
 ✔ Container nc-redis          Started
 ✔ Container nc-php            Started

One command. Eleven things done (four images pulled, network, two volumes, four containers). Everything connected.

Verify all services are running:

$ docker compose ps
NAME         IMAGE            COMMAND                  SERVICE   CREATED      STATUS      PORTS
nc-db        mariadb:11       "docker-entrypoint.s…"   db        5 min ago    Up 5 min    3306/tcp
nc-redis     redis:8.6        "docker-entrypoint.s…"   redis     5 min ago    Up 5 min    6379/tcp
nc-php       nextcloud:fpm    "docker-entrypoint.s…"   php       5 min ago    Up 5 min    9000/tcp
nc-nginx     nginx:latest     "/docker-entrypoint.…"   nginx     5 min ago    Up 5 min    0.0.0.0:8080->80/tcp

Only nc-nginx has a port mapping. The other three services are on the nextcloud network but invisible to the host. That's exactly what we want.


🖥️ Use Nextcloud

Open http://localhost:8080. The Nextcloud setup page loads.

Create an admin account, then fill in the database section:

  • Database type: MariaDB
  • Database user: nextcloud
  • Database password: nextcloud
  • Database name: nextcloud
  • Database host: nc-db (the container name; the service name db also resolves)

(Screenshot: the Nextcloud setup page with database host nc-db and the MariaDB credentials filled in.)

Hit Finish setup. Nextcloud initializes, connects to MariaDB, and drops you into the dashboard.

Upload a file. Create a folder. It works. All four services are talking to each other through that single compose file.

Check the html/ directory on the host. The nextcloud:fpm image populated it on first start:

$ ls html/

You'll see Nextcloud's file structure like index.php, core/, apps/, config/, and more. Nginx is serving from this same directory, so your static files and PHP requests all come from the same source.


🛑 Tear Down the Nextcloud Stack

To stop and remove containers, volumes, and the network:

$ docker compose down --volumes

--volumes removes named volumes (dbdata, redisdata) but not bind-mounted directories. The html/ directory on your host stays untouched. Remove it manually if you want a clean slate:

$ rm -rf html/

Start again and you'll get a fresh Nextcloud setup.


🧪 Exercise 1: Producer, Queue, and Worker

In a real production system, you often have long-running tasks that shouldn't block a web request. The solution is a job queue: the web server adds a job, a separate worker picks it up and processes it.

Create a directory and save these two scripts:

producer.py (adds jobs to Redis):

from http.server import HTTPServer, BaseHTTPRequestHandler
import urllib.parse, redis

r = redis.Redis(host='redis', port=6379, decode_responses=True)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        count = r.llen('jobs')
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(f"<h2>Job Queue</h2><p>{count} jobs in queue</p><form method='post'><input name='job' placeholder='Enter job name'><button>Submit</button></form>".encode())

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        job = urllib.parse.parse_qs(self.rfile.read(length).decode())["job"][0]
        r.lpush('jobs', job)
        self.send_response(302)
        self.send_header("Location", "/")
        self.end_headers()

    def log_message(self, format, *args):
        pass

HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()
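The one non-obvious line in producer.py is the ["job"][0] indexing: parse_qs always returns a list per key, because form fields can repeat. A quick standalone check:

```python
from urllib.parse import parse_qs

# A POST body as the browser sends it (spaces arrive as '+').
body = "job=Generate+monthly+report"
parsed = parse_qs(body)
print(parsed)            # every value is a list
job = parsed["job"][0]   # hence the [0] in producer.py
print(job)
```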

worker.py (processes jobs from Redis):

import redis, time, random

r = redis.Redis(host='redis', port=6379, decode_responses=True)

print("Worker ready. Waiting for jobs...")
while True:
    job = r.brpop('jobs', timeout=0)
    if job:
        _, task = job
        print(f"Processing {task}...")
        time.sleep(random.randint(30, 60))
        print(f"Completed {task}")
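Why LPUSH on one side and BRPOP on the other? Pushing on the left and popping on the right makes the Redis list a FIFO queue. A minimal sketch with a deque standing in for Redis (no server needed):

```python
from collections import deque

queue = deque()

# Producer side: LPUSH appends on the left.
for job in ["job-1", "job-2", "job-3"]:
    queue.appendleft(job)

# Worker side: (B)RPOP takes from the right,
# so the oldest job comes out first.
processed = [queue.pop() for _ in range(len(queue))]
print(processed)  # FIFO order: job-1 first
```

The B in BRPOP adds blocking on top of this: instead of polling an empty list, the worker sleeps until a job arrives.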

Your job: Write the compose file to connect all three services.

Hints:

  • Three services: redis, producer, worker
  • All three need to be on the same network
  • Use python:slim for both producer and worker
  • Mount producer.py and worker.py into their respective containers
  • Producer needs port 5000 exposed
  • Worker uses brpop which blocks until a job is available

📦 Exercise 1 Solution

Create a directory and save all three files inside it:

$ mkdir -p prodwork && cd prodwork

docker-compose.yml:

services:
  redis:
    image: redis:latest

  producer:
    image: python:slim
    command: sh -c "pip install redis && python -u /app/producer.py"
    ports:
      - "5000:5000"
    volumes:
      - ./producer.py:/app/producer.py
    working_dir: /app

  worker:
    image: python:slim
    command: sh -c "pip install redis && python -u /app/worker.py"
    volumes:
      - ./worker.py:/app/worker.py
    working_dir: /app

The -u flag forces unbuffered output. Without it, Python buffers print() when there's no terminal, and you won't see worker logs in real time.
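An equivalent way to get unbuffered output, if you prefer keeping the command plain, is the PYTHONUNBUFFERED environment variable. A sketch of just the changed keys for the worker service:

```yaml
  worker:
    image: python:slim
    command: sh -c "pip install redis && python /app/worker.py"
    environment:
      PYTHONUNBUFFERED: "1"   # same effect as python -u
```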

No networks: block in the compose file. Compose creates a default network named prodwork_default and connects all three services automatically. You can verify:

$ docker network ls | grep prodwork
80e4fc2182b5   prodwork_default   bridge    local

Then start:

$ docker compose up -d

Watch the worker and start submitting jobs:

$ docker compose logs -f worker
worker  | Worker ready. Waiting for jobs...

Open http://localhost:5000 in your browser, submit a job, and every 30-60 seconds you'll see the worker process it:

worker  | Processing Generate monthly report...
worker  | Completed Generate monthly report

The web server never blocked. The job queue handled the delay.


🧪 Exercise 2: Build a Load Balanced App

Here's a Python app that shows its hostname and a random background color. We'll put nginx in front of it. Writing the compose file is your job.

app.py (stdlib, no external dependencies):

from http.server import HTTPServer, BaseHTTPRequestHandler
import socket, random, time

colors = [
    "#e74c3c", "#c0392b", "#8e44ad", "#2c3e50", "#2980b9",
    "#16a085", "#27ae60", "#d35400", "#f39c12", "#2d3436",
]
color = random.choice(colors)
requests = 0
started = time.time()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        global requests
        if self.path == "/favicon.ico":
            self.send_response(204)
            self.end_headers()
            return
        requests += 1
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        hostname = socket.gethostname()
        ip = socket.gethostbyname(hostname)
        uptime = int(time.time() - started)
        html = f"""<html><body style="background:{color};font-family:monospace;text-align:center;padding-top:10%">
        <h1 style="font-size:4em;color:white">{hostname}</h1>
        <p style="font-size:1.8em;color:white">{ip}</p>
        <p style="font-size:1.4em;color:white">Requests: {requests} | Uptime: {uptime}s</p>
        </body></html>"""
        self.wfile.write(html.encode())

    def log_message(self, format, *args):
        pass

HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()

nginx.conf (load balance across upstream instances):

upstream backend {
    server web:5000;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}

Hints for the compose file:

  • Two services: nginx and web (python:slim)
  • Nginx needs port 8080 mapped to 80
  • Nginx mounts nginx.conf
  • Web mounts app.py, runs on port 5000 internally (no host port needed)
  • Both on the same network (or just let Compose create the default)

📦 Exercise 2 Solution

Create a directory and save all three files inside it (docker-compose.yml, app.py, nginx.conf):

$ mkdir -p loadbalance && cd loadbalance

docker-compose.yml:

services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf

  web:
    image: python:slim
    command: python /app/app.py
    volumes:
      - ./app.py:/app/app.py
    working_dir: /app
$ docker compose up -d

Open http://localhost:8080. You'll see a random color with the container hostname. Refresh. Same result for now.

Next post we'll put this under pressure.


🏁 What You've Built

| Feature | What It Does |
| --- | --- |
| One file, two services | CloudBeaver + PostgreSQL on the same network |
| One file, four services | MariaDB, Redis, PHP-FPM, nginx, all in one place |
| Shared volume mounts | PHP-FPM and nginx mount ./html at the same /var/www/html path |
| Nginx + FastCGI | nginx.conf proxies PHP requests to PHP-FPM on port 9000 |
| No unnecessary ports | Only the web-facing service is exposed; database and cache stay internal |
| .env for secrets | Passwords live in a file, not in YAML |
| Redis job queue | Producer, worker, and queue: three services, one compose file |
| Load balanced app | nginx + Python web service, ready for scaling |

👉 Coming up: You've got services talking to each other. But what happens when the worker crashes, or the queue suddenly has a hundred jobs? How do you build a stack that holds up?


Found this helpful? 🙌
