This tutorial walks you through setting up a simple Docker Compose project that serves two Node web servers over HTTPS using Caddy as a reverse proxy. You will learn how to use mkcert to generate wildcard certificates and the minimal configuration needed in the Caddyfile and docker-compose.yml to get it all working.
This tutorial was written for Linux or WSL users (I'm running Ubuntu).
Introduction
When running web services locally with Docker Compose, it’s easy to default to plain HTTP for simplicity. But doing so creates a gap between your development setup and production, where HTTPS is almost always used. This gap matters because of development/production parity: the closer your environments match, the fewer surprises you’ll hit when deploying.
Using HTTPS locally helps you catch issues early, like secure cookies not being set, browser features being blocked in non-secure contexts, or mixed-content errors. By adding a reverse proxy and self-signed certificates to your Compose setup, you mirror real-world conditions more closely and avoid the classic “well, it worked on my machine...” problem.
First Steps
- Make sure you have the following required packages installed:
  - Docker and Docker Compose (naturally). (Install Docker Engine)
    - It's worthwhile to check that Docker and Docker Compose are up to date. As I'm writing this tutorial, Docker is currently at version v29.4.1, and Docker Compose at v5.1.2.
    - Run `docker -v` and `docker compose --version` to check your versions.
    - If you're out of date by a major version for either, consider following the instructions for uninstalling old versions at https://docs.docker.com/engine/install/ubuntu/#uninstall-old-versions and re-installing Docker.
  - mkcert
    - Follow the instructions at https://github.com/FiloSottile/mkcert#installation to install mkcert for your environment.
    - After mkcert is installed, run `mkcert -install`. This is a one-time setup step that makes your computer trust the locally generated SSL certificates created by mkcert. The Docker Compose project described in this tutorial will not work otherwise.
- Run the following commands in whatever directory you keep your coding projects to clone the tutorial repo and cd into the directory:
```shell
git clone https://github.com/moofoo/compose-caddy-tutorial.git
cd compose-caddy-tutorial
```
You should see the following directory structure:
```
├── Caddyfile
├── Dockerfile
├── README.md
├── docker-compose.yml
├── apps
│   ├── admin.js
│   └── www.js
└── scripts
    ├── caddyfile.sh
    └── certs.sh
```
Update /etc/hosts
Did you run mkcert -install after installing mkcert? If you haven't, do that now.
For this tutorial, our docker compose project will be serving two extremely simple node web servers at https://www.caddy-test.local and https://admin.caddy-test.local. We will be generating wildcard certificates to manage the two subdomains (www and admin).
Before we get to the Caddy/Compose configuration, the first thing we're going to do is update the /etc/hosts file to map those domains to a local loopback IP address.
The available loopback address range you have for local usage is 127.0.0.1 to 127.255.255.255, but I've found that below 127.0.0.10 can sometimes be hit or miss on availability, so we're going to use 127.0.0.11 for this tutorial.
Personally, I think using a unique address for each Docker Compose project is good practice: if every project binds to 127.0.0.1/0.0.0.0/localhost, it becomes a real pain the moment you find yourself needing multiple compose projects running at once.
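For example, a second project could claim its own loopback address alongside this one in /etc/hosts (www.other-project.local is a hypothetical domain for illustration):

```
127.0.0.11 www.caddy-test.local
127.0.0.12 www.other-project.local
```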
At any rate,
- Open the hosts file at /etc/hosts and add these lines at the bottom:

```
127.0.0.11 www.caddy-test.local
127.0.0.11 admin.caddy-test.local
```
Creating certificates for Caddy
As mentioned, for this project we're going to create "wildcard" certificates. Wildcard SSL/TLS certificates are certificates that secure an entire domain and all of its first-level subdomains with a single certificate. For example, *.example.com covers api.example.com, www.example.com, and admin.example.com.
Running `mkcert --help` tells us the syntax to create wildcard certificates is:

```
$ mkcert "*.example.it"
Generate "_wildcard.example.it.pem" and "_wildcard.example.it-key.pem".
```
Knowing that,
- Run the following individual commands (or use the helper ./scripts/certs.sh script mentioned afterwards) to generate wildcard certificates for "*.caddy-test.local" in the ./certs directory
```shell
mkcert "*.caddy-test.local"
mkdir -p ./certs
mv ./_wildcard.caddy-test.local.pem ./certs
mv ./_wildcard.caddy-test.local-key.pem ./certs
```
The tutorial repo has a helper bash script, scripts/certs.sh, which takes a domain as an argument and performs the above commands. To create certificates in ./certs as shown above, call it from the project's root directory: `bash scripts/certs.sh caddy-test.local`.
The Caddy configuration file
Here is the Caddyfile the caddy service will use:
```
(tls) {
    tls /etc/caddy/certs/_wildcard.caddy-test.local.pem /etc/caddy/certs/_wildcard.caddy-test.local-key.pem
}

www.caddy-test.local {
    import tls
    reverse_proxy www:3000
}

admin.caddy-test.local {
    import tls
    reverse_proxy admin:3001
}
```
Let's go over each block
```
(tls) {
    tls /etc/caddy/certs/_wildcard.caddy-test.local.pem /etc/caddy/certs/_wildcard.caddy-test.local-key.pem
}
```
This snippet configures tls to use the wildcard certificates we created earlier. The caddy docker compose service config, which we'll get to in a bit, will use a bind mount volume to make our local ./certs directory available in the container at path /etc/caddy/certs.
```
www.caddy-test.local {
    import tls
    reverse_proxy www:3000
}
```
This site block configures the reverse proxy for https://www.caddy-test.local.
- The `import tls` line copies the previously defined `tls` snippet into the block.
- `reverse_proxy www:3000` creates a reverse proxy to host `www` on port `3000`.
  - `www:3000` is the service/host name and port of the Node service within the Docker Compose bridge network.
- In other words, this site block routes requests for `www.caddy-test.local` to `www:3000`, which within the internal Docker bridge network belongs to a service running a Node web server that accepts requests on port 3000.
The site block for admin.caddy-test.local follows the same pattern, but with admin:3001 for the service/host name and port.
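As an aside, since both hostnames are covered by one wildcard certificate, Caddy also lets you collapse the two site blocks into a single wildcard site address with host matchers. A sketch of that alternative (equivalent in effect to the two blocks above, assuming the same certs and services):

```
*.caddy-test.local {
    import tls

    @www host www.caddy-test.local
    handle @www {
        reverse_proxy www:3000
    }

    @admin host admin.caddy-test.local
    handle @admin {
        reverse_proxy admin:3001
    }
}
```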
Now, let's take a look at how the services are set up in docker-compose.yml:
Service setup in docker-compose.yml
```yaml
name: caddy-tutorial
networks:
  caddy_tutorial_network:
volumes:
  caddy_data:
  caddy_config:
services:
  www:
    networks:
      - caddy_tutorial_network
    expose:
      - 3000
    environment:
      - PORT=3000
    working_dir: /app
    command: node www.js
    build:
      args:
        NAME: www
      context: .
      dockerfile: ./Dockerfile
  admin:
    networks:
      - caddy_tutorial_network
    expose:
      - 3001
    environment:
      - PORT=3001
    command: node admin.js
    working_dir: /app
    build:
      args:
        NAME: admin
      context: .
      dockerfile: ./Dockerfile
  caddy:
    image: caddy:2.11.2-alpine
    networks:
      - caddy_tutorial_network
    ports:
      - "127.0.0.11:80:80"
      - "127.0.0.11:443:443"
    depends_on:
      - www
      - admin
    volumes:
      - ./certs:/etc/caddy/certs
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    cap_add:
      - NET_ADMIN
```
Let's go over each section, starting from the top:
```yaml
networks:
  caddy_tutorial_network:
```
This creates a user-defined bridge network, which allows services to reach each other by service name or alias (rather than by IP address), e.g. `admin:3001` for the admin Node service.
```yaml
volumes:
  caddy_data:
  caddy_config:
```
This defines two named volumes used by the Caddy service to persist state across container restarts: Caddy keeps certificates and other runtime data in /data and its autosaved configuration in /config (the official Caddy Docker examples recommend persisting both).
Now the node services:
```yaml
www:
  networks:
    - caddy_tutorial_network
  expose:
    - 3000
  environment:
    - PORT=3000
  working_dir: /app
  command: node www.js
  build:
    args:
      NAME: www
    context: .
    dockerfile: ./Dockerfile
admin:
  networks:
    - caddy_tutorial_network
  expose:
    - 3001
  environment:
    - PORT=3001
  command: node admin.js
  working_dir: /app
  build:
    args:
      NAME: admin
    context: .
    dockerfile: ./Dockerfile
```
Looking at the config for service www and starting from the top:
```yaml
networks:
  - caddy_tutorial_network
expose:
  - 3000
```
This config connects the service to the user-defined bridge network and exposes port 3000 to the other services on that network. The service is therefore reachable at address www:3000 from other services on the internal Docker network. Because `ports` has not been defined, the service cannot be accessed directly from the host (it doesn't need to be; only Caddy talks to it).
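For contrast, if you did want to reach the www service directly from the host (say, for debugging without going through Caddy), you'd add a ports mapping. An illustrative variant, deliberately not part of the tutorial's compose file:

```yaml
www:
  networks:
    - caddy_tutorial_network
  ports:
    - "127.0.0.11:3000:3000"   # host_ip:host_port:container_port
```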
```yaml
environment:
  - PORT=3000
command: node www.js
working_dir: /app
```
This config sets the environment variable PORT to 3000 in the running container, and specifies that the command node www.js should run (with /app as the working directory) when the service starts up.
Let's look at ./apps/www.js now:
```javascript
import http from "http";

const host = "0.0.0.0";
const port = process.env.PORT; // 3000 for service `www`, 3001 for service `admin`, per the service environment config

const requestListener = function (req, res) {
  res.writeHead(200);
  res.end("User Site");
};

const server = http.createServer(requestListener);
server.listen(port, host, () => {
  console.log(`Server is running on http://${host}:${port}`);
});
```
This very simple server listens on 0.0.0.0 on the port set by the environment variable from the service config. 0.0.0.0 isn't a routable address; it means "listen on all of the container's IPv4 interfaces". The server simply responds to every request with the text 'User Site'.
./apps/admin.js is the exact same code, except it responds with the text 'Admin Site'.
Moving on with the service config,
```yaml
build:
  args:
    NAME: www
  context: .
  dockerfile: ./Dockerfile
```
This part of the config determines how the container image for the service will be built. The NAME arg with value www is available for the Dockerfile on build, which allows the use of a single Dockerfile for this simple project. Let's look at that Dockerfile now:
```dockerfile
FROM node:lts-alpine
WORKDIR /app
ARG NAME
COPY ./apps/$NAME.js /app/$NAME.js
CMD ["node", "$NAME.js"]
```
So, the container image for the www service is effectively the following, with `$NAME` resolving to `www` at build time in the COPY instruction. (Strictly speaking, the exec-form CMD does not expand `$NAME`, and ARG values aren't available at runtime anyway, but that doesn't matter here: each service's `command:` in docker-compose.yml overrides CMD.)

```dockerfile
FROM node:lts-alpine
WORKDIR /app
COPY ./apps/www.js /app/www.js
CMD ["node", "www.js"]
```
And likewise for the admin service's image.
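Incidentally, if you wanted the images to run without the `command:` override in docker-compose.yml, one option is to persist the build arg into an environment variable and use the shell form of CMD, which (unlike the exec/JSON-array form) does expand variables. An illustrative variant, not what the repo's Dockerfile does:

```dockerfile
FROM node:lts-alpine
WORKDIR /app
ARG NAME
# Persist the build-time ARG as a runtime ENV so CMD can see it
ENV NAME=$NAME
COPY ./apps/$NAME.js /app/$NAME.js
# Shell form runs via /bin/sh -c, so $NAME expands at container start
CMD node $NAME.js
```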
Finally, let's go over the the caddy service configuration, which is using the caddy:2.11.2-alpine image:
```yaml
caddy:
  image: caddy:2.11.2-alpine
  networks:
    - caddy_tutorial_network
  ports:
    - "127.0.0.11:80:80"
    - "127.0.0.11:443:443"
  depends_on:
    - www
    - admin
  volumes:
    - ./certs:/etc/caddy/certs
    - ./Caddyfile:/etc/caddy/Caddyfile
    - caddy_data:/data
    - caddy_config:/config
  cap_add:
    - NET_ADMIN
```
Starting from the top:
```yaml
networks:
  - caddy_tutorial_network
ports:
  - "127.0.0.11:80:80"
  - "127.0.0.11:443:443"
depends_on:
  - www
  - admin
```
Like the Node services, the Caddy service runs on the caddy_tutorial_network bridge network. But unlike the Node services, it publishes ports to the host, at address 127.0.0.11 on ports 80 and 443.

The depends_on config tells Compose to start the www and admin services before Caddy, since Caddy complains if the upstream Node servers aren't reachable when it starts up. (Note that plain depends_on only waits for the containers to start, not for the apps inside them to be ready.)
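If you want Caddy to wait until the Node servers actually answer HTTP (not just until their containers have started), Compose supports healthcheck-gated depends_on. A hedged sketch of that variant (assuming busybox wget is available in the node:lts-alpine image, which it normally is):

```yaml
services:
  www:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://127.0.0.1:3000/"]
      interval: 5s
      timeout: 3s
      retries: 5
  caddy:
    depends_on:
      www:
        condition: service_healthy
```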
```yaml
volumes:
  - ./certs:/etc/caddy/certs
  - ./Caddyfile:/etc/caddy/Caddyfile
  - caddy_data:/data
  - caddy_config:/config
```
The first two volumes bind mount our local ./certs directory and the Caddyfile to the paths the container expects. The second two are the named volumes defined at the top of docker-compose.yml, which Caddy uses to persist certificates and other state (/data) and its autosaved configuration (/config).
```yaml
cap_add:
  - NET_ADMIN
```
This grants the container the NET_ADMIN capability, which lets Caddy raise its UDP buffer sizes for HTTP/3 (QUIC) without requiring manual sysctl changes on your host. Without it, Caddy may log warnings about being unable to sufficiently increase the buffer size; adding this config fixes that when/if it comes up.
Alright! Let's run this thing!
Run that thang!
From the project root directory, run:

```shell
sudo docker compose up --build
```

(I'm assuming Docker must be run as root; you can change that by following Docker's post-installation steps for managing Docker as a non-root user.)
After everything builds and starts up, you should see the text 'User Site' when you open https://www.caddy-test.local in your browser and 'Admin Site' when you open https://admin.caddy-test.local. Huzzah!
Tips
After you've made changes to the caddy service definition, the Caddyfile, and/or your /etc/hosts file, you may find that your project's web services aren't available as expected at the domain(s) you've specified.
Assuming your configuration is actually correct, you can usually resolve such Docker networking issues / Caddy confusion by running:

```shell
docker compose down -v --remove-orphans && docker network prune
```

and then running docker compose up --build again when you start the compose project.