While preparing for a recent hands-on lab we ran into a surprising limit: AWS Lightsail Container Service only allows 4 custom domains per account. We needed a separate public instance of our application for each lab participant (we expected 11 participants), so this quota was a deal-breaker for the container service approach. And we discovered it only a day before the lab started, so we had to act quickly.
This post describes the practical solution we adopted: run a single Lightsail (Ubuntu) instance, host multiple app containers there with docker-compose, and expose each container under its own subdomain via a Cloudflare Tunnel. The result: fast deployment, a single server with no public IP exposed in DNS, and stable per-user URLs.
TL;DR
- Problem: Lightsail Container Service limits custom domains to 4 → can't create per-user public URLs.
- Solution: Use a Lightsail Instance (Ubuntu) + docker-compose to run multiple containers and Cloudflare Tunnel to map subdomains to localhost ports.
- Result: 11 public subdomains served from one instance without exposing ports or a fixed public IP in DNS.
Why This Architecture?
- Running a Lightsail Instance (a plain virtual machine) gives full control and no custom-domain quota.
- docker-compose makes it simple to orchestrate many containers on one host.
- Cloudflare Tunnel provides secure inbound connectivity without opening firewall ports or relying on a static IP: cloudflared accepts connections for specified hostnames and forwards traffic to local services.
Architecture (simple view):
app1.lab.work → cloudflared (tunnel) → localhost:5001 → our-app-1 (container)
app2.lab.work → cloudflared (tunnel) → localhost:5002 → our-app-2 (container)
...
app11.lab.work → cloudflared → localhost:5011 → our-app-11
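Every hop after cloudflared is plain HTTP on localhost, so each app can be verified directly on the instance. A quick smoke-test sketch, assuming the host ports 5001-5011 used in the compose file below:

# Confirm every app container answers on its expected host port.
for i in $(seq 1 11); do
  curl -s -o /dev/null -w "app$i (port $((5000 + i))): HTTP %{http_code}\n" "http://localhost:$((5000 + i))"
done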
Example docker-compose
This is the lightweight pattern we used. Each service runs the same application image but gets a distinct container name and host port mapping.
version: "3.8"
services:
  our-app-1:
    build: ./our-app
    container_name: our-app-1
    ports:
      - "5001:5001"
    environment:
      - PORT=5001
    restart: unless-stopped
  our-app-2:
    build: ./our-app
    container_name: our-app-2
    ports:
      - "5002:5001"
    environment:
      - PORT=5001
    restart: unless-stopped
  # ...
  our-app-11:
    build: ./our-app
    container_name: our-app-11
    ports:
      - "5011:5001"
    environment:
      - PORT=5001
    restart: unless-stopped
Notes:
- The containers all expose the application on the container's internal port 5001, but we map distinct host ports (5001..5011) so cloudflared can forward to the correct service.
- If your app reads PORT from the environment, adjust the container start command so it binds to the correct port (we kept the same internal port everywhere and mapped different external ports).
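Hand-writing eleven nearly identical service blocks invites copy-paste mistakes. A small generator script is an alternative; this is a sketch built on the names from the example above (our-app, the ./our-app build context, internal port 5001):

#!/bin/bash
# Generate docker-compose.yml with N identical app services on distinct host ports.
# Sketch only: adjust N, the build context, and the internal port to your app.
N=11
{
  echo 'version: "3.8"'
  echo 'services:'
  for i in $(seq 1 "$N"); do
    cat <<EOF
  our-app-$i:
    build: ./our-app
    container_name: our-app-$i
    ports:
      - "$((5000 + i)):5001"
    environment:
      - PORT=5001
    restart: unless-stopped
EOF
  done
} > docker-compose.yml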
deploy.sh (what we actually used)
Below is the deploy script we executed on the Lightsail instance (/home/ubuntu/lab). The script installs Docker and cloudflared, builds the containers, writes a Cloudflare tunnel config, and starts the cloudflared service. Important: the script writes a config that references /etc/cloudflared/lab.json (the tunnel credentials file). Creating that credentials file requires a one-time cloudflared "tunnel create" step which is explained further below.
#!/bin/bash
set -e
echo "[1/5] Installing Docker & Cloudflared..."
sudo apt update
sudo apt install -y docker.io docker-compose wget
# Install cloudflared
wget -O cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
sudo dpkg -i cloudflared.deb || sudo apt --fix-broken install -y
echo "[2/5] Building 11 app containers..."
cd /home/ubuntu/lab
sudo docker-compose build
echo "[3/5] Starting containers..."
sudo docker-compose up -d
echo "[4/5] Setting up Cloudflare tunnel config..."
sudo mkdir -p /etc/cloudflared
sudo tee /etc/cloudflared/config.yml > /dev/null <<EOF
tunnel: lab
credentials-file: /etc/cloudflared/lab.json
ingress:
  - hostname: app1.lab.work
    service: http://localhost:5001
  - hostname: app2.lab.work
    service: http://localhost:5002
  - hostname: app3.lab.work
    service: http://localhost:5003
  - hostname: app4.lab.work
    service: http://localhost:5004
  - hostname: app5.lab.work
    service: http://localhost:5005
  - hostname: app6.lab.work
    service: http://localhost:5006
  - hostname: app7.lab.work
    service: http://localhost:5007
  - hostname: app8.lab.work
    service: http://localhost:5008
  - hostname: app9.lab.work
    service: http://localhost:5009
  - hostname: app10.lab.work
    service: http://localhost:5010
  - hostname: app11.lab.work
    service: http://localhost:5011
  - service: http_status:404
EOF
echo "[5/5] Starting Cloudflare tunnel..."
sudo systemctl daemon-reload
sudo systemctl enable cloudflared
sudo systemctl restart cloudflared
echo "=============================="
echo " Your our apps are ready!"
echo "=============================="
echo ""
echo "Public URLs:"
for i in {1..11}
do
echo "app$i → https://app$i.lab.work"
done
Important: the credentials-file: /etc/cloudflared/lab.json value must point to the actual tunnel credentials JSON file created when you run cloudflared tunnel create lab (covered next). If the credentials file is missing, the cloudflared service will fail to start.
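The eleven ingress entries can likewise be generated rather than hand-written, and cloudflared can check the result before a restart. A sketch (hostnames and ports as in the config above; cloudflared tunnel ingress validate reads the config from its default locations, including /etc/cloudflared/config.yml):

#!/bin/bash
# Regenerate /etc/cloudflared/config.yml for 11 apps, then validate it.
{
  echo "tunnel: lab"
  echo "credentials-file: /etc/cloudflared/lab.json"
  echo "ingress:"
  for i in $(seq 1 11); do
    echo "  - hostname: app$i.lab.work"
    echo "    service: http://localhost:$((5000 + i))"
  done
  echo "  - service: http_status:404"
} | sudo tee /etc/cloudflared/config.yml > /dev/null

# Validate the ingress rules before restarting the tunnel service.
cloudflared tunnel ingress validate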
One-time Cloudflare Tunnel Steps (interactive)
- Install cloudflared locally or on the server (we installed it on the server in the script).
- Login and create the tunnel (this requires access to the Cloudflare account for the lab.work zone):
  - Run cloudflared login. This opens a browser to authenticate your Cloudflare account and grants the cloudflared instance permission to create DNS records.
  - Then create the named tunnel: cloudflared tunnel create lab. This creates a credentials file at ~/.cloudflared/<TUNNEL-ID>.json.
  - Copy the credentials file to /etc/cloudflared/lab.json:
    sudo mkdir -p /etc/cloudflared
    sudo cp ~/.cloudflared/<TUNNEL-ID>.json /etc/cloudflared/lab.json
- Create DNS routes for each hostname (you can also create the DNS records in the Cloudflare dashboard):
  - For each hostname run:
    cloudflared tunnel route dns lab app1.lab.work
    cloudflared tunnel route dns lab app2.lab.work
    ...
  - This creates Cloudflare DNS records that point the hostnames to the tunnel. You should see them as proxied CNAME records in Cloudflare DNS. A loop version is sketched after this list.
- Start the tunnel (the deploy script starts the cloudflared system service):
  - Check logs: sudo journalctl -u cloudflared -f
  - Test: curl or open https://app1.lab.work
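Rather than typing eleven route commands, a short loop covers them all (a sketch assuming the tunnel name lab and the appN.lab.work hostnames used throughout):

# Create a DNS route for every lab hostname in one pass.
for i in $(seq 1 11); do
  cloudflared tunnel route dns lab "app$i.lab.work"
done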
Notes:
- If you prefer to manage DNS manually in the Cloudflare UI, create proxied records for each subdomain pointing to the tunnel as instructed by the Cloudflare docs. Using cloudflared tunnel route dns is convenient because it automates DNS creation.
- The login and tunnel-creation steps are interactive. For fully automated setups, Cloudflare API tokens and non-interactive flows can be used, but that requires careful management of keys.
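To illustrate the non-interactive flow mentioned above: creating the proxied records boils down to one Cloudflare API call per subdomain. This is a hedged sketch; CF_API_TOKEN, ZONE_ID, and TUNNEL_ID are placeholders for values from your own account, not something from our setup:

#!/bin/bash
# Create a proxied CNAME record per subdomain via the Cloudflare API.
CF_API_TOKEN="..."   # API token with DNS edit permission for the zone
ZONE_ID="..."        # zone ID of lab.work
TUNNEL_ID="..."      # UUID printed by 'cloudflared tunnel create lab'
for i in $(seq 1 11); do
  curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
    -H "Authorization: Bearer $CF_API_TOKEN" \
    -H "Content-Type: application/json" \
    --data "{\"type\":\"CNAME\",\"name\":\"app$i.lab.work\",\"content\":\"$TUNNEL_ID.cfargotunnel.com\",\"proxied\":true}"
done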
Caveats, Security & Gotchas
- The Lightsail Instance is a single point of failure. Note that this setup was created specifically for labs and trainings. For higher availability, run multiple instances behind a load balancer or use Kubernetes/ECS.
- The tunnel credentials JSON must be stored securely (we place it in /etc/cloudflared and restrict file permissions).
- Rate-limits / fair-use: Cloudflare Tunnel is intended for this type of workload, but verify any usage limits for your plan, and ensure you don’t hit Cloudflare or upstream rate-limits during peak lab use.
- DNS propagation: after tunnel route dns there can be short DNS propagation delays.
- Exposed ports: we never open inbound ports on the VM for the apps; cloudflared handles inbound traffic via the tunnel. You should still harden SSH and instance access (a minimal firewall sketch follows this list).
- Logs & debugging: check sudo journalctl -u cloudflared -f and docker logs <container> when things misbehave.
- TLS: Cloudflare provides TLS for your hostnames automatically when using the tunnel and proxied DNS records.
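On the hardening point above: with the tunnel in place, the host firewall only needs SSH. A minimal ufw sketch (our assumption, not part of the original deploy script; remember to mirror it in the Lightsail console firewall):

# Allow only SSH inbound; the apps are reached exclusively through the tunnel.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw --force enable
sudo ufw status verbose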
Improvements & Alternatives
- Traefik or nginx as an internal reverse-proxy: run one proxy on the host that routes subdomains to containers, then point Cloudflare Tunnel to the proxy (single local port). This reduces the number of host ports and centralizes routing logic (and makes scaling containers more flexible).
- Use a wildcard subdomain and generate DNS records programmatically if many labs are expected; Cloudflare supports automation via API tokens.
- If you need thousands of per-user instances, move to container orchestration (k8s, ECS) and either use an Ingress controller + external LB or scale with multi-instance / autoscaling groups.
- Consider ephemeral user environments that spin up and down on-demand to save cost.
- Monitoring and health checks: add health endpoints and simple monitors to restart broken containers automatically.
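A starting point for the last item: a cron-friendly sketch that probes each app and restarts its container when the check fails (container names and host ports follow the compose example above):

#!/bin/bash
# Probe each app; restart its container if the HTTP check fails.
# Run from cron, e.g.: */5 * * * * /home/ubuntu/lab/healthcheck.sh
for i in $(seq 1 11); do
  if ! curl -sf -o /dev/null --max-time 5 "http://localhost:$((5000 + i))"; then
    echo "app$i unhealthy, restarting our-app-$i"
    sudo docker restart "our-app-$i"
  fi
done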
Cost & operational note
- Lightsail Instances are cheap and predictable: an instance with 2 GB memory, 2 vCPUs, and 3 TB transfer was enough for us (it costs $12/month).
- Cloudflare Tunnel is free for many small use cases; we already had a domain there and did not pay anything for our two days of heavy usage.
Conclusion
Switching from Lightsail Container Service to a Lightsail Instance + docker-compose + Cloudflare Tunnel gave us:
- Fast, repeatable lab deployments on a single server
- Individual public subdomains per participant
- A secure inbound setup without opening host ports
- ...and it really saved us: we did not have to cancel the event at the last minute. :)