Quick one-liner: Connect CloudBeaver and PostgreSQL in one compose file. Then scale up to a full four-service Nextcloud stack with shared networks.
🤔 Why This Matters
In the last post, you created two compose projects: PostgreSQL in one directory, CloudBeaver in another. Each has its own compose file, its own network, its own lifecycle. They can't talk to each other.
That's the problem this post solves. We'll put them in one file, on one network, and CloudBeaver can finally reach PostgreSQL. No custom network commands. No `--network` flags. Just one `docker compose up -d`.
Once you've got two services talking, we'll scale up to four: a full Nextcloud stack with MariaDB, Redis, PHP-FPM, and nginx, all in one file.
By the end of this post, you'll have:
- CloudBeaver + PostgreSQL connected in one compose file
- A four-service Nextcloud stack on a shared network
✅ Prerequisites
- Ep 1-7 completed. You know Compose basics: single service per file, `.env` files, and the `up`/`ps`/`logs`/`down` workflow.
📦 The Problem: Two Compose Files, Two Networks
Last time you ended up with PostgreSQL in one directory and CloudBeaver in another:
~/
├── dtstack-pg/
│   ├── docker-compose.yml
│   └── .env
└── dtstack-cb/
    ├── docker-compose.yml
    └── ...
Each project gets its own network. PostgreSQL is on `dtstack-pg_default`, CloudBeaver is on `dtstack-cb_default`. They can't reach each other. You can't connect CloudBeaver to the database.
That's what multi-service compose fixes. One file, one network, both services talking.
🔧 Step 1: CloudBeaver + PostgreSQL in One File
Create a single directory for your stack:
$ mkdir -p cloudstack && cd cloudstack
Create docker-compose.yml:
services:
  postgres:
    container_name: dtstack-pg
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: ${PG_PASSWORD}
      POSTGRES_DB: ${PG_DATABASE}
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - dtstack

  cloudbeaver:
    container_name: dtstack-cb
    image: dbeaver/cloudbeaver:latest
    ports:
      - "8978:8978"
    volumes:
      - cbdata:/opt/cloudbeaver/workspace
    networks:
      - dtstack

volumes:
  pgdata:
  cbdata:

networks:
  dtstack:
    driver: bridge
Two services. One `networks:` block. Both on the same `dtstack` network.
Create .env:
$ cat > .env << EOF
PG_PASSWORD=docker
PG_DATABASE=testdb
EOF
Two things to notice:

1. No `ports` on PostgreSQL. CloudBeaver reaches it on the internal network, so there's no need to expose port 5432 to the host. Only CloudBeaver needs a port mapping, since it's the one you access from your browser.

2. Each service lists `networks: - dtstack`. This explicitly connects them to the shared bridge network. Compose would create a default network and connect them automatically, but declaring it explicitly makes the intent clear.
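One more thing happening implicitly: Compose substitutes `${PG_PASSWORD}` and `${PG_DATABASE}` from `.env` before it creates anything. Conceptually it's plain `${VAR}` substitution, which you can sketch with Python's `string.Template` (an analogy, not Compose's actual implementation; Compose also supports defaults like `${VAR:-fallback}`):

```python
from string import Template

# Values as they would be read from the .env file in this post.
env = {"PG_PASSWORD": "docker", "PG_DATABASE": "testdb"}

# What the postgres service's environment line becomes after substitution.
line = Template("POSTGRES_PASSWORD: ${PG_PASSWORD}").substitute(env)
print(line)  # POSTGRES_PASSWORD: docker
```

You can see the real substituted output for your stack with `docker compose config`.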
🚀 Start the Stack
$ docker compose up -d
[+] up 32/32
 ✔ Image postgres:17                Pulled
 ✔ Image dbeaver/cloudbeaver:latest Pulled
 ✔ Network cloudstack_dtstack       Created
 ✔ Volume cloudstack_pgdata         Created
 ✔ Volume cloudstack_cbdata         Created
 ✔ Container dtstack-cb             Started
 ✔ Container dtstack-pg             Started
One command. Seven things done (two images pulled, network, two volumes, two containers). Everything connected.
Verify both services are running:
$ docker compose ps
NAME         IMAGE                        COMMAND                  SERVICE       CREATED     STATUS     PORTS
dtstack-cb   dbeaver/cloudbeaver:latest   "./launch-product.sh"    cloudbeaver   5 min ago   Up 5 min   0.0.0.0:8978->8978/tcp
dtstack-pg   postgres:17                  "docker-entrypoint.s…"   postgres      5 min ago   Up 5 min   5432/tcp
Only CloudBeaver has a port mapping. PostgreSQL is on the dtstack network but invisible to the host. That's exactly what we want.
🔌 Verify Connectivity
Open http://localhost:8978 in your browser. CloudBeaver loads.
Now add PostgreSQL as a connection in CloudBeaver:
- Host: `postgres` (the service name, not an IP address)
- Port: `5432`
- Database: `testdb` (from your `.env`)
- Username: `postgres`
- Password: `docker` (from your `.env`)
Connect. CloudBeaver reaches PostgreSQL by service name: no IP addresses, no `docker network connect`. It just works.
🔍 Inspect the Network
See what Compose created:
$ docker network inspect cloudstack_dtstack
Look at the Containers section. Both services are listed with their IP addresses:
"Containers": {
"abc123...": {
"Name": "dtstack-pg",
"IPv4Address": "172.19.0.2/16"
},
"def456...": {
"Name": "dtstack-cb",
"IPv4Address": "172.19.0.3/16"
}
}
Two containers. One network. Use the service name (`postgres`, `cloudbeaver`) when connecting services to each other, not the container name.
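If you want just the name/IP pairs rather than the full JSON, a few lines of Python can filter the inspect output. A minimal sketch, run here against a sample shaped like the output above (in practice you'd feed it the real `docker network inspect cloudstack_dtstack` result):

```python
import json

# Sample shaped like `docker network inspect` output (a JSON array of networks).
inspect_json = """
[{"Name": "cloudstack_dtstack",
  "Containers": {
    "abc123": {"Name": "dtstack-pg", "IPv4Address": "172.19.0.2/16"},
    "def456": {"Name": "dtstack-cb", "IPv4Address": "172.19.0.3/16"}}}]
"""

network = json.loads(inspect_json)[0]
for c in sorted(network["Containers"].values(), key=lambda c: c["Name"]):
    print(c["Name"], c["IPv4Address"])
```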
🗑 Tear It Down
$ docker compose down
[+] down 3/3
 ✔ Container dtstack-cb        Removed
 ✔ Container dtstack-pg        Removed
 ✔ Network cloudstack_dtstack  Removed
Two containers gone. Network gone. Volumes survive.
Volumes are preserved by default. Check:
$ docker volume ls | grep cloudstack
local cloudstack_pgdata
local cloudstack_cbdata
Start the stack again and your database is still there:
$ docker compose up -d
To remove everything including volumes:
$ docker compose down --volumes
[+] down 5/5
 ✔ Container dtstack-pg        Removed
 ✔ Container dtstack-cb        Removed
 ✔ Volume cloudstack_cbdata    Removed
 ✔ Volume cloudstack_pgdata    Removed
 ✔ Network cloudstack_dtstack  Removed
📦 Step 2: Scale Up to a Four-Service Nextcloud Stack
Now let's go bigger. Four services in one file, all talking to each other.
Create a single directory for your Nextcloud stack:
$ mkdir -p nextcloud && cd nextcloud
Create docker-compose.yml:
services:
  db:
    container_name: nc-db
    image: mariadb:11
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - nextcloud

  redis:
    container_name: nc-redis
    image: redis:8.6
    volumes:
      - redisdata:/data
    networks:
      - nextcloud

  php:
    container_name: nc-php
    image: nextcloud:fpm
    volumes:
      - ./html:/var/www/html
    networks:
      - nextcloud

  nginx:
    container_name: nc-nginx
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/var/www/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      - nextcloud

volumes:
  dbdata:
  redisdata:

networks:
  nextcloud:
    driver: bridge
Four services. One `networks:` block. All of them on the same `nextcloud` network.
Create .env:
$ cat > .env << EOF
MYSQL_ROOT_PASSWORD=nextcloud
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=nextcloud
EOF
Create nginx.conf:
server {
    listen 80;
    server_name localhost;

    root /var/www/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Three things to notice:

1. Volumes are shared. `dbdata` and `redisdata` are defined once at the bottom and used by the services that need them.

2. PHP-FPM and nginx both mount `./html`. Both services mount the same host directory at the same container path: `/var/www/html`. When Nextcloud writes an uploaded file to `/var/www/html/data/user1/photo.jpg` inside the PHP-FPM container, nginx can immediately serve it from the same path. No copying, no syncing, just one shared directory.

3. Nginx needs a config to talk to PHP-FPM. The `nginx.conf` file tells nginx: when you see a `.php` request, don't serve the raw file. Forward it to the `php` service on port 9000 via FastCGI. Without this, your browser would download `index.php` instead of running it.
🚀 Start the Nextcloud Stack
$ docker compose up -d
[+] up 19/19
 ✔ Image nextcloud:fpm         Pulled
 ✔ Image nginx:latest          Pulled
 ✔ Image mariadb:11            Pulled
 ✔ Image redis:8.6             Pulled
 ✔ Network nextcloud_nextcloud Created
 ✔ Volume nextcloud_dbdata     Created
 ✔ Volume nextcloud_redisdata  Created
 ✔ Container nc-nginx          Started
 ✔ Container nc-db             Started
 ✔ Container nc-redis          Started
 ✔ Container nc-php            Started
One command. Eleven things done (four images pulled, network, two volumes, four containers). Everything connected.
Verify all services are running:
$ docker compose ps
NAME       IMAGE           COMMAND                  SERVICE   CREATED     STATUS     PORTS
nc-db      mariadb:11      "docker-entrypoint.s…"   db        5 min ago   Up 5 min   3306/tcp
nc-redis   redis:8.6       "docker-entrypoint.s…"   redis     5 min ago   Up 5 min   6379/tcp
nc-php     nextcloud:fpm   "docker-entrypoint.s…"   php       5 min ago   Up 5 min   9000/tcp
nc-nginx   nginx:latest    "/docker-entrypoint.…"   nginx     5 min ago   Up 5 min   0.0.0.0:8080->80/tcp
Only nc-nginx has a port mapping. The other three services are on the nextcloud network but invisible to the host. That's exactly what we want.
🖥️ Use Nextcloud
Open http://localhost:8080. The Nextcloud setup page loads.
Create an admin account, then fill in the database section:

- Database type: MariaDB
- Database user: `nextcloud`
- Database password: `nextcloud`
- Database name: `nextcloud`
- Database host: `db` (the service name)
Hit Finish setup. Nextcloud initializes, connects to MariaDB, and drops you into the dashboard.
Upload a file. Create a folder. It works. All four services are talking to each other through that single compose file.
Check the `html/` directory on the host. The `nextcloud:fpm` image populated it on first start:
$ ls html/
You'll see Nextcloud's file structure: `index.php`, `core/`, `apps/`, `config/`, and more. Nginx is serving from this same directory, so your static files and PHP requests all come from the same source.
🗑 Tear Down the Nextcloud Stack
To stop and remove containers, volumes, and the network:
$ docker compose down --volumes
`--volumes` removes named volumes (`dbdata`, `redisdata`) but not bind-mounted directories. The `html/` directory on your host stays untouched. Remove it manually if you want a clean slate:
$ rm -rf html/
Start again and you'll get a fresh Nextcloud setup.
🧪 Exercise 1: Producer, Queue, and Worker
In a real production system, you often have long-running tasks that shouldn't block a web request. The solution is a job queue: the web server adds a job, a separate worker picks it up and processes it.
Create a directory and save these two scripts:
producer.py (adds jobs to Redis):
from http.server import HTTPServer, BaseHTTPRequestHandler
import urllib.parse, redis

r = redis.Redis(host='redis', port=6379, decode_responses=True)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        count = r.llen('jobs')
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(f"<h2>Job Queue</h2><p>{count} jobs in queue</p><form method='post'><input name='job' placeholder='Enter job name'><button>Submit</button></form>".encode())

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        job = urllib.parse.parse_qs(self.rfile.read(length).decode())["job"][0]
        r.lpush('jobs', job)
        self.send_response(302)
        self.send_header("Location", "/")
        self.end_headers()

    def log_message(self, format, *args):
        pass

HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()
worker.py (processes jobs from Redis):
import redis, time, random

r = redis.Redis(host='redis', port=6379, decode_responses=True)
print("Worker ready. Waiting for jobs...")

while True:
    job = r.brpop('jobs', timeout=0)
    if job:
        _, task = job
        print(f"Processing {task}...")
        time.sleep(random.randint(30, 60))
        print(f"Completed {task}")
Your job: Write the compose file to connect all three services.
Hints:

- Three services: redis, producer, worker
- All three need to be on the same network
- Use `python:slim` for both producer and worker
- Mount `producer.py` and `worker.py` into their respective containers
- Producer needs port 5000 exposed
- Worker uses `brpop`, which blocks until a job is available
📦 Exercise 1 Solution
Create a directory and save all three files inside it:
$ mkdir -p prodwork && cd prodwork
docker-compose.yml:
services:
  redis:
    image: redis:latest

  producer:
    image: python:slim
    command: sh -c "pip install redis && python -u /app/producer.py"
    ports:
      - "5000:5000"
    volumes:
      - ./producer.py:/app/producer.py
    working_dir: /app

  worker:
    image: python:slim
    command: sh -c "pip install redis && python -u /app/worker.py"
    volumes:
      - ./worker.py:/app/worker.py
    working_dir: /app
The `-u` flag forces unbuffered output. Without it, Python buffers `print()` when there's no terminal, and you won't see worker logs in real time.
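If you'd rather not bake `-u` into the command, setting the `PYTHONUNBUFFERED` environment variable has the same effect. A sketch of the worker service with that variant (same image and mounts as above):

```yaml
  worker:
    image: python:slim
    command: sh -c "pip install redis && python /app/worker.py"
    environment:
      PYTHONUNBUFFERED: "1"   # same effect as python -u
    volumes:
      - ./worker.py:/app/worker.py
    working_dir: /app
```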
No `networks:` block in the compose file. Compose creates a default network named `prodwork_default` and connects all three services automatically. You can verify:
$ docker network ls | grep prodwork
80e4fc2182b5 prodwork_default bridge local
Then start:
$ docker compose up -d
Watch the worker and start submitting jobs:
$ docker compose logs -f worker
worker | Worker ready. Waiting for jobs...
Open http://localhost:5000 in your browser and submit a few jobs. The worker picks them up one at a time, taking 30-60 seconds each:
worker  | Processing Generate monthly report...
worker  | Completed Generate monthly report
The web server never blocked. The job queue handled the delay.
🧪 Exercise 2: Build a Load Balanced App
Here's a Python app that shows its hostname and a random background color. We'll put nginx in front of it. Writing the compose file is your job.
app.py (stdlib, no external dependencies):
from http.server import HTTPServer, BaseHTTPRequestHandler
import socket, random, time

colors = [
    "#e74c3c", "#c0392b", "#8e44ad", "#2c3e50", "#2980b9",
    "#16a085", "#27ae60", "#d35400", "#f39c12", "#2d3436",
]
color = random.choice(colors)
requests = 0
started = time.time()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        global requests
        if self.path == "/favicon.ico":
            self.send_response(204)
            self.end_headers()
            return
        requests += 1
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        hostname = socket.gethostname()
        ip = socket.gethostbyname(hostname)
        uptime = int(time.time() - started)
        html = f"""<html><body style="background:{color};font-family:monospace;text-align:center;padding-top:10%">
<h1 style="font-size:4em;color:white">{hostname}</h1>
<p style="font-size:1.8em;color:white">{ip}</p>
<p style="font-size:1.4em;color:white">Requests: {requests} | Uptime: {uptime}s</p>
</body></html>"""
        self.wfile.write(html.encode())

    def log_message(self, format, *args):
        pass

HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()
nginx.conf (load balance across upstream instances):
upstream backend {
    server web:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
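With a single `server web:5000;` entry there's nothing to balance yet, but nginx's default upstream strategy is round-robin: requests rotate through whatever servers the `upstream` block lists. A small Python sketch of that rotation, using hypothetical backend names:

```python
from itertools import cycle

# Round-robin over an upstream pool, the way nginx's default
# balancing walks through `server` entries one request at a time.
backends = cycle(["web-1:5000", "web-2:5000", "web-3:5000"])
first_five = [next(backends) for _ in range(5)]
print(first_five)
# ['web-1:5000', 'web-2:5000', 'web-3:5000', 'web-1:5000', 'web-2:5000']
```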
Hints for the compose file:

- Two services: nginx and web (python:slim)
- Nginx needs port 8080 mapped to 80
- Nginx mounts `nginx.conf`
- Web mounts `app.py` and runs on port 5000 internally (no host port needed)
- Both on the same network (or just let Compose create the default)
📦 Exercise 2 Solution
Create a directory and save all three files inside it (docker-compose.yml, app.py, nginx.conf):
$ mkdir -p loadbalance && cd loadbalance
docker-compose.yml:
services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf

  web:
    image: python:slim
    command: python /app/app.py
    volumes:
      - ./app.py:/app/app.py
    working_dir: /app
$ docker compose up -d
Open http://localhost:8080. You'll see a random color with the container hostname. Refresh. Same result for now.
Next post we'll put this under pressure.
📊 What You've Built
| Feature | What It Does |
|---|---|
| One file, two services | CloudBeaver + PostgreSQL on the same network |
| One file, four services | MariaDB, Redis, PHP-FPM, nginx, all in one place |
| Shared volume mounts | PHP-FPM and nginx mount `./html` at the same `/var/www/html` path |
| Nginx + FastCGI | `nginx.conf` proxies PHP requests to PHP-FPM on port 9000 |
| No unnecessary ports | Only the web-facing service is exposed; database and cache stay internal |
| `.env` for secrets | Passwords live in a file, not in YAML |
| Redis job queue | Producer, worker, and queue. Three services, one compose file |
| Load balanced app | nginx + Python web service, ready for scaling |
🔜 Coming up: You've got services talking to each other. But what happens when the worker crashes, or the queue suddenly has a hundred jobs? How do you build a stack that holds up?
Found this helpful? 🙌
- LinkedIn: Share with your network
- Twitter: Tweet about it
- Questions? Drop a comment below or reach out on LinkedIn