Let’s be honest: managed cloud container services like AWS ECS or Google Kubernetes Engine (GKE) are incredibly convenient. But when your application starts to scale, the bandwidth and compute costs associated with those managed platforms can quickly spiral out of control.
This is exactly why so many engineering teams are migrating their container infrastructure back to bare metal servers.
By leveraging dedicated servers with full root access, you get 100% of the CPU and RAM you pay for, zero "noisy neighbors," and the freedom to architect your environment exactly how you want it.
In this guide, we are going to build a production-ready container environment from scratch. We will set up a secure Private Docker Registry to host your custom images, and then deploy a Docker Swarm cluster to run them—all hosted on high-performance Ubuntu dedicated servers.
Let’s get into the command line. 💻
🤔 Why Host Your Own Registry and Swarm?
Before we start typing commands, it helps to understand the architecture. Why separate the registry from the cluster?
- Security & Control: Public registries are great for open-source, but proprietary code belongs on hardware you control. A private registry on a dedicated server ensures your intellectual property never leaves your private network.
- Lightning-Fast Deployments: Pulling container images over a local, private Gigabit network (like the internal networks provided with BytesRack servers) is vastly faster than pulling them over the public internet.
- No Vendor Lock-in: Docker Swarm is built natively into Docker. It is drastically simpler to manage than Kubernetes, requires less overhead, and runs brilliantly on bare metal.
🛠️ Prerequisites: What You Will Need
To follow this tutorial, you need the following infrastructure. (If you don't have this yet, a robust BytesRack Dedicated Server is the perfect starting point).
- Server 1 (The Registry Node): An Ubuntu 22.04 or 24.04 server. Needs decent storage space (NVMe preferred) to store your container images.
- Server 2 & 3 (The Swarm Nodes): Two Ubuntu servers to act as your manager and worker nodes.
- Full Root Access: You need sudo or root privileges on all machines.
- Private Networking: Ideally, these servers should be able to communicate via private IPs to keep traffic secure and fast.
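Before going further, it is worth confirming that your nodes can actually reach each other over the private network. Here is a small sketch of a reachability check using bash's built-in /dev/tcp pseudo-device (the IPs in the example are hypothetical; Swarm management uses TCP 2377 and the registry will use TCP 5000):

```shell
# reachable: return 0 if <host> accepts TCP connections on <port>.
# Uses bash's /dev/tcp pseudo-device, so no netcat or nmap is required.
reachable() {
  local host="$1" port="$2"
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

# Example (hypothetical private IPs):
#   reachable 10.0.0.2 5000 && echo "registry reachable"
#   reachable 10.0.0.3 2377 && echo "swarm manager reachable"
```

If a check fails, look at your firewall rules before blaming Docker: Swarm needs TCP 2377 (management), TCP/UDP 7946 (node discovery), and UDP 4789 (overlay networking) open between nodes.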
Phase 1: Setting Up the Private Docker Registry
Your private registry is exactly what it sounds like—a secure vault for your Docker images. We will deploy the official registry:2 image, but we are going to do it the right way: locked down with basic authentication, with notes on where TLS (SSL) fits in for production.
Step 1: Install Docker
Run this on your Registry Node (and eventually your Swarm nodes):
# Update your package index
sudo apt-get update
# Install Docker's official GPG key and repository, then install Docker
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
Step 2: Configure Authentication (htpasswd)
You do not want anyone on the internet pulling your private images. We will use htpasswd to create a username and password.
# Install apache2-utils for the htpasswd command
sudo apt-get install -y apache2-utils
# Create a directory to store your registry data and passwords
sudo mkdir -p /opt/registry/auth
# Create a user (replace 'admin' with your preferred username)
# You will be prompted to type a password.
sudo htpasswd -Bc /opt/registry/auth/htpasswd admin
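If you later script user management, it helps to sanity-check the file before restarting the registry. Below is a hypothetical helper (not a required step in this tutorial) that verifies every line looks like a bcrypt entry, which is the format `htpasswd -B` produces and the registry's auth backend expects:

```shell
# validate_htpasswd: succeed only if every non-empty line in the file looks
# like a bcrypt htpasswd entry ("user:$2y$<cost>$<hash>"). Plain-text or
# MD5 entries will not work with the registry's htpasswd auth.
validate_htpasswd() {
  local file="$1"
  if grep -vE '^[^:]+:\$2[aby]\$[0-9]{2}\$' "$file" | grep -q .; then
    return 1   # found a malformed (or non-bcrypt) line
  fi
  return 0
}
```

A quick `validate_htpasswd /opt/registry/auth/htpasswd && echo OK` after adding users can save you a confusing authentication failure later.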
Step 3: Start the Registry Container
Note: For a true production environment, you should put this registry behind an Nginx reverse proxy with a Let's Encrypt SSL certificate. For the sake of this tutorial's length, we are assuming you are running this over a secure, private internal network.
Let's spin up the registry, binding it to port 5000 and mounting our authentication file:
docker run -d \
  -p 5000:5000 \
  --restart=always \
  --name my-private-registry \
  -v /opt/registry/auth:/auth \
  -v /opt/registry/data:/var/lib/registry \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  registry:2
Your registry is now live and waiting for images! 🎉
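You can confirm that from another machine with a quick smoke test against the Registry v2 API. A sketch (curl-based; the IP in the example is a placeholder): note that a 401 response is actually good news here, because it means the registry is up and your htpasswd authentication is being enforced.

```shell
# registry_ping: hit the Docker Registry v2 API root and report whether the
# service is answering. Both 200 and 401 count as "up"; 401 just means the
# registry is enforcing the htpasswd auth we configured.
registry_ping() {
  local host="$1" port="${2:-5000}"
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://${host}:${port}/v2/")
  [ "$code" = "200" ] || [ "$code" = "401" ]
}

# Example (hypothetical IP):
#   registry_ping 10.0.0.2 && echo "registry is up"
```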
Phase 2: Initializing Docker Swarm on Bare Metal
Now that we have a place to store our code, let’s build the compute engine. Docker Swarm turns a pool of dedicated servers into a single, cohesive virtual host.
Step 1: Initialize the Swarm Manager
Log into Server 2 (your designated Manager node). Make sure Docker is installed (use the same installation commands from Phase 1).
To start the cluster, you need to tell Swarm which IP address to advertise to the other servers. Use your server's private IP to keep cluster management traffic off the public internet.
# Replace <PRIVATE_IP> with your manager server's internal IP address
docker swarm init --advertise-addr <PRIVATE_IP>
When this command completes, the terminal will output a docker swarm join command containing a secure token. Copy this token. It is the key for other servers to join the cluster.
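If you are automating cluster bring-up, you can capture that output and extract the token programmatically instead of copy-pasting. A minimal sketch (the token in the example below is an illustrative fake, not a real one):

```shell
# extract_join_token: pull the SWMTKN join token out of saved output from
# `docker swarm init` or `docker swarm join-token worker`.
extract_join_token() {
  grep -oE 'SWMTKN-[0-9A-Za-z-]+' "$1" | head -n 1
}

# Typical automation flow (sketch):
#   docker swarm init --advertise-addr 10.0.0.3 | tee /tmp/swarm-init.log
#   TOKEN=$(extract_join_token /tmp/swarm-init.log)
```

Note that you can always re-print the token later on the manager with `docker swarm join-token worker`, so losing the original output is not fatal.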
Step 2: Join the Worker Node
Log into Server 3 (your designated Worker node). Make sure Docker is installed. Paste the command you copied from the Manager node:
docker swarm join --token SWMTKN-1-xxxxxxxxxxxxxxxxxxxx <MANAGER_PRIVATE_IP>:2377
You should see a message saying: This node joined a swarm as a worker.
To verify your cluster is healthy, go back to your Manager node and run:
docker node ls
You will see a list of your bare metal servers acting as a unified cluster.
Phase 3: Connecting the Swarm to Your Private Registry
Here is where many sysadmins get stuck. Your Swarm cluster needs permission to pull images from the private registry we built in Phase 1.
Step 1: Authenticate the Swarm Nodes
On every node in your Swarm (both Manager and Worker), you need to log into the private registry using the credentials you created earlier.
# Replace with the IP or domain of your Registry server
docker login <REGISTRY_SERVER_IP>:5000
🔥 Pro-tip for bare metal: If you didn't set up TLS (SSL) on your registry and are using internal IPs, Docker will block the connection by default. You must edit /etc/docker/daemon.json on all Swarm nodes to allow the insecure internal registry:
{
  "insecure-registries": ["<REGISTRY_SERVER_IP>:5000"]
}
Restart Docker (sudo systemctl restart docker) after adding this.
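Hand-editing daemon.json across many nodes is error-prone, and a stray comma will stop Docker from starting at all. Here is a sketch of a helper that merges the entry in with python3 instead of overwriting the file; the path defaults to /etc/docker/daemon.json (which needs root), and you can pass an explicit scratch path to try it out safely first:

```shell
# add_insecure_registry: merge a registry into the "insecure-registries"
# array of a daemon.json file, preserving any other settings already there.
# Running against the real /etc/docker/daemon.json requires root.
add_insecure_registry() {
  local registry="$1" file="${2:-/etc/docker/daemon.json}"
  [ -f "$file" ] || echo '{}' > "$file"
  python3 - "$file" "$registry" <<'PY'
import json, sys

path, registry = sys.argv[1], sys.argv[2]
with open(path) as f:
    cfg = json.load(f)
# setdefault keeps existing entries and only appends if missing
regs = cfg.setdefault("insecure-registries", [])
if registry not in regs:
    regs.append(registry)
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
PY
}

# Usage on a Swarm node (hypothetical registry IP):
#   add_insecure_registry 10.0.0.2:5000
#   sudo systemctl restart docker
```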
Step 2: Push an Image to Your Registry
Let’s test the plumbing. On any machine, pull a standard Nginx image, tag it for your private registry, and push it.
# Pull standard nginx
docker pull nginx:latest
# Tag it to point to your private registry
docker tag nginx:latest <REGISTRY_SERVER_IP>:5000/my-custom-nginx:v1
# Push it to your dedicated registry server
docker push <REGISTRY_SERVER_IP>:5000/my-custom-nginx:v1
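When you migrate more than a handful of images, retagging by hand gets tedious. A tiny hypothetical helper that computes the private-registry reference for any local image, stripping whatever registry or namespace prefix the image already carries:

```shell
# private_ref: map an image reference onto your private registry, dropping
# any existing registry/namespace prefix (e.g. "docker.io/library/").
private_ref() {
  local registry="$1" image="$2"
  echo "${registry}/${image##*/}"
}

# Example loop (hypothetical registry IP and image list):
#   for img in nginx:latest redis:7; do
#     ref=$(private_ref 10.0.0.2:5000 "$img")
#     docker tag "$img" "$ref" && docker push "$ref"
#   done
```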
Step 3: Deploy a Swarm Service from the Private Registry
Now for the grand finale. Let's tell Docker Swarm to deploy a highly available service using the image we just pushed to our private vault.
Run this on your Manager Node:
docker service create \
  --name web-app \
  --replicas 3 \
  --publish published=8080,target=80 \
  --with-registry-auth \
  <REGISTRY_SERVER_IP>:5000/my-custom-nginx:v1
Why --with-registry-auth is critical: This flag tells the Swarm manager to pass the registry login tokens down to the worker nodes. Without this flag, the worker nodes will be denied access when they try to pull the image, and your deployment will fail.
You can check the status of your deployment by running docker service ps web-app.
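The real payoff of this setup is rolling updates: push a new tag, then `docker service update --image` swaps replicas over one at a time with no downtime. Here is a sketch of a version-bump helper you might use in a deploy script, assuming the `vN` tag convention we used above (the registry IP is a placeholder):

```shell
# bump_tag: given an image reference ending in a "vN" tag, return the same
# reference with the version incremented (v1 -> v2, v2 -> v3, ...).
bump_tag() {
  local ref="$1"
  local base="${ref%:*}" tag="${ref##*:}"
  echo "${base}:v$(( ${tag#v} + 1 ))"
}

# Deploy-script sketch (hypothetical IP):
#   NEXT=$(bump_tag 10.0.0.2:5000/my-custom-nginx:v1)
#   docker push "$NEXT"
#   docker service update --with-registry-auth --image "$NEXT" web-app
```

Note that `--with-registry-auth` matters on updates for the same reason it did on create: workers need fresh credentials to pull the new tag.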
The Bare Metal Advantage 🚀
Congratulations! You have just built a robust, self-hosted container infrastructure.
By deploying your Private Docker Registry and Docker Swarm cluster on dedicated servers, you have bypassed the heavy API restrictions, egress data fees, and shared-resource bottlenecks of traditional cloud providers. You own the data layer, and you control the compute layer.
Because this architecture requires modifying system-level configurations (like daemon.json and firewall rules for port 2377), full root access is strictly required.
If you are looking for the perfect hardware to host your new Swarm cluster, BytesRack offers enterprise-grade dedicated servers with the raw compute power, fast NVMe storage, and unrestricted root access required to run container workloads at scale.
Ready to scale without the cloud tax? Check out our high-performance dedicated server configurations today.