<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pasha Sviderski</title>
    <description>The latest articles on DEV Community by Pasha Sviderski (@psviderski).</description>
    <link>https://dev.to/psviderski</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3220086%2F8a3729c5-fe79-469c-97fc-abfe64eb2081.jpeg</url>
      <title>DEV Community: Pasha Sviderski</title>
      <link>https://dev.to/psviderski</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/psviderski"/>
    <language>en</language>
    <item>
      <title>How to connect Docker containers across multiple hosts with WireGuard 🌐</title>
      <dc:creator>Pasha Sviderski</dc:creator>
      <pubDate>Tue, 05 Aug 2025 02:04:50 +0000</pubDate>
      <link>https://dev.to/psviderski/how-to-connect-docker-containers-across-multiple-hosts-with-wireguard-2f4a</link>
      <guid>https://dev.to/psviderski/how-to-connect-docker-containers-across-multiple-hosts-with-wireguard-2f4a</guid>
      <description>&lt;p&gt;You want your Docker containers to talk to each other, but they're running on different machines. Perhaps across different cloud providers or mixing cloud with on-prem. The usual approach of mapping services to host ports quickly becomes a pain. Worse, if they're on the public internet, you need to secure every exposed endpoint with TLS and auth.&lt;/p&gt;

&lt;p&gt;What if your containers on different machines could communicate directly without exposing any ports? Using their private Docker IPs, as if they were on the same machine. Here's how you can use pure WireGuard and some clever networking tricks to make this work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What we're building&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Step 1: Configure Docker networks&lt;/li&gt;
&lt;li&gt;Step 2: Connect Docker networks with WireGuard&lt;/li&gt;
&lt;li&gt;Step 3: Configure IP routing&lt;/li&gt;
&lt;li&gt;Step 4: Testing&lt;/li&gt;
&lt;li&gt;Step 5: Make the configuration persistent&lt;/li&gt;
&lt;li&gt;Scaling beyond two machines&lt;/li&gt;
&lt;li&gt;Limitations&lt;/li&gt;
&lt;li&gt;Automating with Uncloud&lt;/li&gt;
&lt;li&gt;Alternative solutions&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What we're building&lt;/h2&gt;

&lt;p&gt;Docker containers are typically connected to a &lt;a href="https://docs.docker.com/engine/network/drivers/bridge/" rel="noopener noreferrer"&gt;bridge network&lt;/a&gt; on their host machine, which allows them to communicate with each other. A bridge network also provides isolation from containers not connected to it and other networks on the host. What we want to achieve is connecting these bridge networks across machines so that containers on different machines can communicate as if they were connected to the same local bridge network.&lt;/p&gt;

&lt;p&gt;The incantation we need is called a site-to-site VPN. Any solution would work. Moreover, if the machines are on the same local network, they're already connected and only lack the appropriate routing configuration. But I'll describe a more versatile approach that works even when the machines are on different continents or behind NAT. WireGuard is the ideal solution for this use case: it's lightweight, &lt;a href="https://www.wireguard.com/performance/" rel="noopener noreferrer"&gt;fast&lt;/a&gt;, simple to configure, provides &lt;a href="https://www.wireguard.com/protocol/" rel="noopener noreferrer"&gt;strong security&lt;/a&gt; and NAT traversal.&lt;/p&gt;

&lt;p&gt;We'll create a new Docker bridge network &lt;code&gt;multi-host&lt;/code&gt; on each machine with unique subnets. Then establish a secure WireGuard tunnel between the machines and configure IP routing so that &lt;code&gt;multi-host&lt;/code&gt; bridge networks become routable via the tunnel. Finally, we'll run containers on each machine connected to the &lt;code&gt;multi-host&lt;/code&gt; network and test that they can communicate with each other using their private IPs.&lt;/p&gt;

&lt;p&gt;I will use these two machines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Machine 1: Debian 12 virtual machine in my homelab network in Australia, which is behind NAT&lt;/li&gt;
&lt;li&gt;Machine 2: Ubuntu 24.04 server from Hetzner in Finland that has a public IP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrtateo9fwcdtit02h96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrtateo9fwcdtit02h96.png" alt="WireGuard overlay network" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of &lt;a href="https://docs.docker.com/network/" rel="noopener noreferrer"&gt;Docker networking&lt;/a&gt; and &lt;a href="https://www.wireguard.com/" rel="noopener noreferrer"&gt;WireGuard&lt;/a&gt;. If you're new to these topics, you might want to read up on them first.&lt;/li&gt;
&lt;li&gt;At least two Linux machines with root access and Docker installed. They should be on the same network or be able to communicate over the internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Step 1: Configure Docker networks&lt;/h2&gt;

&lt;p&gt;Most of the commands in this guide require root privileges. You can run them with &lt;code&gt;sudo&lt;/code&gt; or log in as root. I'll start root shells on both machines with &lt;code&gt;sudo -i&lt;/code&gt; for convenience.&lt;/p&gt;

&lt;p&gt;We can't connect the default &lt;a href="https://docs.docker.com/engine/network/drivers/bridge/" rel="noopener noreferrer"&gt;Docker bridge networks&lt;/a&gt; across machines because they use the same subnet (&lt;code&gt;172.17.0.0/16&lt;/code&gt; by default). We need them to have non-overlapping addresses so that we can set up routing between them later.&lt;/p&gt;

&lt;p&gt;Therefore, let's create new Docker bridge networks on each machine with manually specified unique subnets. You can choose any subnets from the &lt;a href="https://en.wikipedia.org/wiki/Private_network#Private_IPv4_addresses" rel="noopener noreferrer"&gt;private IPv4 address ranges&lt;/a&gt; that do not overlap with each other or with your existing networks. I'll use &lt;code&gt;10.200.1.0/24&lt;/code&gt; and &lt;code&gt;10.200.2.0/24&lt;/code&gt; for Machine 1 and Machine 2, respectively. They don't even need to be sequential or be part of the same larger network. However, using a common parent network (like &lt;code&gt;10.200.0.0/16&lt;/code&gt; in my case) can simplify firewall rules and make it easier to manage more machines later.&lt;/p&gt;

&lt;p&gt;You can use any name for the Docker networks. I'll call them &lt;code&gt;multi-host&lt;/code&gt; for clarity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Machine 1&lt;/span&gt;
docker network create &lt;span class="nt"&gt;--subnet&lt;/span&gt; 10.200.1.0/24 &lt;span class="nt"&gt;-o&lt;/span&gt; com.docker.network.bridge.trusted_host_interfaces&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"wg0"&lt;/span&gt; multi-host
&lt;span class="c"&gt;# Machine 2&lt;/span&gt;
docker network create &lt;span class="nt"&gt;--subnet&lt;/span&gt; 10.200.2.0/24 &lt;span class="nt"&gt;-o&lt;/span&gt; com.docker.network.bridge.trusted_host_interfaces&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"wg0"&lt;/span&gt; multi-host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Starting with Docker 28.2.0 (&lt;a href="https://github.com/moby/moby/pull/49832" rel="noopener noreferrer"&gt;PR&lt;/a&gt;), you have to explicitly specify the host interfaces from which you allow &lt;a href="https://docs.docker.com/engine/network/packet-filtering-firewalls/#direct-routing" rel="noopener noreferrer"&gt;direct routing&lt;/a&gt; to containers in bridge networks. This is done with the &lt;code&gt;com.docker.network.bridge.trusted_host_interfaces&lt;/code&gt; option when creating the network. In our case, we want to allow routing via the WireGuard interface &lt;code&gt;wg0&lt;/code&gt; that will be created in the next step.&lt;/p&gt;

&lt;p&gt;Provide this option even if you're using an older Docker version, as it'll be required if you upgrade Docker in the future.&lt;/p&gt;

&lt;h2&gt;Step 2: Connect Docker networks with WireGuard&lt;/h2&gt;

&lt;p&gt;By default, WireGuard uses UDP port 51820 for communication. To establish a tunnel, at least one of the machines needs to be able to reach the other's port over the internet or local network. Make sure this port isn't blocked by a firewall on either machine.&lt;/p&gt;

&lt;p&gt;For example, when using &lt;code&gt;iptables&lt;/code&gt;, you can allow incoming UDP traffic on port 51820 with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;iptables &lt;span class="nt"&gt;-I&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; udp &lt;span class="nt"&gt;--dport&lt;/span&gt; 51820 &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install WireGuard utilities and generate key pairs on both machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install &lt;/span&gt;wireguard
&lt;span class="c"&gt;# Change the mode for files created in the shell to 0600&lt;/span&gt;
&lt;span class="nb"&gt;umask &lt;/span&gt;077
&lt;span class="c"&gt;# Create 'privatekey' file containing a new private key&lt;/span&gt;
wg genkey &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; privatekey
&lt;span class="c"&gt;# Create 'publickey' file containing the corresponding public key&lt;/span&gt;
wg pubkey &amp;lt; privatekey &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; publickey
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create WireGuard configuration files using the generated keys.&lt;/p&gt;

&lt;p&gt;On Machine 1, create &lt;code&gt;/etc/wireguard/wg0.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Interface]&lt;/span&gt;
&lt;span class="py"&gt;ListenPort&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;51820&lt;/span&gt;
&lt;span class="py"&gt;PrivateKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;replace with 'privatekey' file content from Machine 1&amp;gt;&lt;/span&gt;

&lt;span class="nn"&gt;[Peer]&lt;/span&gt;
&lt;span class="py"&gt;PublicKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;replace with 'publickey' file content from Machine 2&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;# IP ranges for which a peer will route traffic: Docker subnet on Machine 2
&lt;/span&gt;&lt;span class="py"&gt;AllowedIPs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;10.200.2.0/24&lt;/span&gt;
&lt;span class="c"&gt;# Public IP of Machine 2
&lt;/span&gt;&lt;span class="py"&gt;Endpoint&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;157.180.72.195:51820&lt;/span&gt;
&lt;span class="c"&gt;# Periodically send keepalive packets to keep NAT/firewall mapping alive
&lt;/span&gt;&lt;span class="py"&gt;PersistentKeepalive&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;25&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Machine 2, create &lt;code&gt;/etc/wireguard/wg0.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Interface]&lt;/span&gt;
&lt;span class="py"&gt;ListenPort&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;51820&lt;/span&gt;
&lt;span class="py"&gt;PrivateKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;replace with 'privatekey' file content from Machine 2&amp;gt;&lt;/span&gt;

&lt;span class="nn"&gt;[Peer]&lt;/span&gt;
&lt;span class="py"&gt;PublicKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;replace with 'publickey' file content from Machine 1&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;# IP ranges for which a peer will route traffic: Docker subnet on Machine 1
&lt;/span&gt;&lt;span class="py"&gt;AllowedIPs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;10.200.1.0/24&lt;/span&gt;
&lt;span class="c"&gt;# Reachable endpoint of Machine 1
# Endpoint =
# Periodically send keepalive packets to keep NAT/firewall mapping alive
&lt;/span&gt;&lt;span class="py"&gt;PersistentKeepalive&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;25&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Refer to the &lt;a href="https://github.com/pirate/wireguard-docs?tab=readme-ov-file#config-reference" rel="noopener noreferrer"&gt;Unofficial WireGuard Documentation&lt;/a&gt; for more details on the configuration options.&lt;/p&gt;

&lt;p&gt;Note that the &lt;code&gt;Endpoint&lt;/code&gt; option can be omitted on one of the machines if the peer is not reachable from that machine. In my case, Machine 1 is behind NAT in my private homelab network, which is not reachable from the remote Hetzner server (Machine 2). The bidirectional tunnel can still be established in this case, but Machine 1 must initiate the connection.&lt;/p&gt;

&lt;p&gt;If both of your machines are reachable from each other, specify the &lt;code&gt;Endpoint&lt;/code&gt; option in both configs, which allows either machine to establish the connection without waiting for the other side to initiate it. If both of your machines are behind NAT, see &lt;a href="https://github.com/pirate/wireguard-docs#NAT-to-NAT-Connections" rel="noopener noreferrer"&gt;NAT to NAT Connections&lt;/a&gt; for more information.&lt;/p&gt;

&lt;p&gt;Note also that we don't set the &lt;code&gt;Address&lt;/code&gt; option in the configs because we don't want to assign any IP addresses to the WireGuard interfaces. We want the tunnel to only encapsulate and transfer packets from the &lt;code&gt;multi-host&lt;/code&gt; bridge networks and don't want either end of it to be the destination for the packets.&lt;/p&gt;

&lt;p&gt;As the key pairs are now specified in the configuration files, you can remove the &lt;code&gt;privatekey&lt;/code&gt; and &lt;code&gt;publickey&lt;/code&gt; files on both machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;rm &lt;/span&gt;privatekey publickey
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now start the WireGuard interface &lt;code&gt;wg0&lt;/code&gt; on both machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wg-quick up wg0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the tunnel is up and running on either machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wg show
interface: wg0
  public key: &lt;span class="nv"&gt;4P6scLYcHdgwU8tMkQYGjq6pu4KvrwKyKIg7JuP6E30&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
  private key: &lt;span class="o"&gt;(&lt;/span&gt;hidden&lt;span class="o"&gt;)&lt;/span&gt;
  listening port: 51820

peer: 0WDgQ+XkHkODI+3xT4APiI9GJS7MvjGH6wtk+W57TgM&lt;span class="o"&gt;=&lt;/span&gt;
  endpoint: 157.180.72.195:51820
  allowed ips: 10.200.2.0/24
  latest handshake: 12 seconds ago
  transfer: 124 B received, 624 B sent
  persistent keepalive: every 25 seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see the &lt;code&gt;latest handshake&lt;/code&gt; time updating, it means the tunnel is working correctly.&lt;/p&gt;

&lt;h2&gt;Step 3: Configure IP routing&lt;/h2&gt;

&lt;p&gt;You've established the WireGuard tunnel, but packets between containers won't flow yet. You need to configure IP routing between the tunnel and the container networks.&lt;/p&gt;

&lt;p&gt;The Docker daemon automatically enables IP forwarding in the kernel when it starts, so you don't need to configure &lt;code&gt;net.ipv4.ip_forward&lt;/code&gt; manually with &lt;code&gt;sysctl&lt;/code&gt;.&lt;/p&gt;
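&lt;p&gt;If you want to double-check, the current forwarding state can be read straight from &lt;code&gt;/proc&lt;/code&gt; (a quick sanity check, not something the setup strictly requires):&lt;/p&gt;

```shell
# Print the kernel's IPv4 forwarding flag: 1 = enabled, 0 = disabled.
# With the Docker daemon running, this should already be 1.
cat /proc/sys/net/ipv4/ip_forward
```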

&lt;p&gt;The challenge is that Docker blocks traffic between external interfaces and container networks by default for security reasons. You need to explicitly allow WireGuard traffic from &lt;code&gt;wg0&lt;/code&gt; interface to reach your containers via the &lt;code&gt;multi-host&lt;/code&gt; bridge interface. Docker uses iptables, so you can allow this traffic by adding a rule to the &lt;code&gt;FORWARD&lt;/code&gt; chain before any other Docker-managed rules that would drop it.&lt;/p&gt;

&lt;p&gt;Fortunately, Docker creates a special &lt;code&gt;DOCKER-USER&lt;/code&gt; chain exactly for this purpose. It's processed before other Docker-managed chains, allowing you to add custom rules that won't be overridden by Docker.&lt;/p&gt;

&lt;p&gt;To create the required iptables rule, you need to find the bridge interface name for the &lt;code&gt;multi-host&lt;/code&gt; network you created earlier. It's named &lt;code&gt;br-&amp;lt;short-network-id&amp;gt;&lt;/code&gt;, where &lt;code&gt;&amp;lt;short-network-id&amp;gt;&lt;/code&gt; is the first 12 characters of the network ID.&lt;/p&gt;
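&lt;p&gt;If you prefer to script this instead of copying the ID by hand, the bridge name can be derived from the full network ID. The 64-character ID below is illustrative; on a live host, substitute the output of &lt;code&gt;docker network inspect -f '{{.Id}}' multi-host&lt;/code&gt;:&lt;/p&gt;

```shell
# The bridge device is named "br-" + the first 12 characters of the network ID.
# Example ID for illustration; replace it with the real ID from
# 'docker network inspect -f "{{.Id}}" multi-host'.
network_id="661096b2a5d94c3e8f0a1b2c3d4e5f60718293a4b5c6d7e8f9a0b1c2d3e4f506"
echo "br-$(printf '%s' "$network_id" | cut -c1-12)"
# Prints: br-661096b2a5d9
```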

&lt;p&gt;Add the iptables rule to allow traffic from &lt;code&gt;wg0&lt;/code&gt; to &lt;code&gt;multi-host&lt;/code&gt; bridge on Machine 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker network &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-host
NETWORK ID     NAME         DRIVER    SCOPE
661096b2a5d9   multi-host   bridge    &lt;span class="nb"&gt;local&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;iptables &lt;span class="nt"&gt;-I&lt;/span&gt; DOCKER-USER &lt;span class="nt"&gt;-i&lt;/span&gt; wg0 &lt;span class="nt"&gt;-o&lt;/span&gt; br-661096b2a5d9 &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the iptables rule to allow traffic from &lt;code&gt;wg0&lt;/code&gt; to &lt;code&gt;multi-host&lt;/code&gt; bridge on Machine 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker network &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-host
NETWORK ID     NAME         DRIVER    SCOPE
48f808048e7c   multi-host   bridge    &lt;span class="nb"&gt;local&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;iptables &lt;span class="nt"&gt;-I&lt;/span&gt; DOCKER-USER &lt;span class="nt"&gt;-i&lt;/span&gt; wg0 &lt;span class="nt"&gt;-o&lt;/span&gt; br-48f808048e7c &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The traffic in the other direction (from &lt;code&gt;multi-host&lt;/code&gt; bridge to &lt;code&gt;wg0&lt;/code&gt;) is not blocked by Docker by default. But it still won't be able to make it through the tunnel. The reason is that Docker creates a &lt;code&gt;MASQUERADE&lt;/code&gt; rule in the &lt;code&gt;nat&lt;/code&gt; table for every bridge network with option &lt;a href="https://docs.docker.com/engine/network/drivers/bridge/#options" rel="noopener noreferrer"&gt;&lt;code&gt;com.docker.network.bridge.enable_ip_masquerade&lt;/code&gt;&lt;/a&gt; set to &lt;code&gt;true&lt;/code&gt; (which is the default). In my case, the rule looks like this on Machine 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;POSTROUTING &lt;span class="nt"&gt;-s&lt;/span&gt; 10.200.1.0/24 &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; br-661096b2a5d9 &lt;span class="nt"&gt;-j&lt;/span&gt; MASQUERADE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This essentially configures NAT for all external traffic coming from containers which is necessary for allowing them to access the internet and other external networks. However, it equally applies to the traffic going through the &lt;code&gt;wg0&lt;/code&gt; interface. It tries to masquerade the source IP address of the packets with the IP address of the &lt;code&gt;wg0&lt;/code&gt; interface and fails because the &lt;code&gt;wg0&lt;/code&gt; interface doesn't have an IP. This results in the packets being &lt;a href="https://elixir.bootlin.com/linux/v6.15.5/source/net/netfilter/nf_nat_masquerade.c#L54-L58" rel="noopener noreferrer"&gt;dropped&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You could assign an IP address to &lt;code&gt;wg0&lt;/code&gt; but this would cause the following unwanted side effects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containers from other Docker networks on the same machine could route through the tunnel to reach remote &lt;code&gt;multi-host&lt;/code&gt; containers, violating Docker's network isolation model.&lt;/li&gt;
&lt;li&gt;Remote containers would see all connections as coming from the &lt;code&gt;wg0&lt;/code&gt; IP instead of the actual container IPs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's instead add another rule to the &lt;code&gt;POSTROUTING&lt;/code&gt; chain in the &lt;code&gt;nat&lt;/code&gt; table to skip masquerading for the traffic from the &lt;code&gt;multi-host&lt;/code&gt; network going through the tunnel.&lt;/p&gt;

&lt;p&gt;Run on Machine 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-I&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-s&lt;/span&gt; 10.200.1.0/24 &lt;span class="nt"&gt;-o&lt;/span&gt; wg0 &lt;span class="nt"&gt;-j&lt;/span&gt; RETURN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run on Machine 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-I&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-s&lt;/span&gt; 10.200.2.0/24 &lt;span class="nt"&gt;-o&lt;/span&gt; wg0 &lt;span class="nt"&gt;-j&lt;/span&gt; RETURN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Step 4: Testing&lt;/h2&gt;

&lt;p&gt;Now you can finally run containers on both machines connected to their &lt;code&gt;multi-host&lt;/code&gt; networks and test that they can communicate.&lt;/p&gt;

&lt;p&gt;Run a &lt;a href="https://hub.docker.com/r/traefik/whoami" rel="noopener noreferrer"&gt;whoami&lt;/a&gt; container on Machine 2, which listens on port 80 and replies with its hostname, IP addresses, and the HTTP request it received:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="nb"&gt;whoami&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; multi-host traefik/whoami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get its IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}"&lt;/span&gt; &lt;span class="nb"&gt;whoami
&lt;/span&gt;10.200.2.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now fetch &lt;code&gt;http://10.200.2.2&lt;/code&gt; from inside a container on Machine 1.&lt;/p&gt;

&lt;p&gt;Drum roll, please! 🥁&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; multi-host alpine/curl http://10.200.2.2
Hostname: bdb55fc9d9ae
IP: 127.0.0.1
IP: ::1
IP: 10.200.2.2
RemoteAddr: 10.200.1.2:37682
GET / HTTP/1.1
Host: 10.200.2.2
User-Agent: curl/8.14.1
Accept: &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yay, it works! The request came from the container &lt;code&gt;10.200.1.2&lt;/code&gt; on Machine 1 and was served by the container &lt;code&gt;10.200.2.2&lt;/code&gt; on Machine 2.&lt;/p&gt;

&lt;p&gt;You can ping remote containers or use any other network protocols to communicate with them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; multi-host alpine:latest ping &lt;span class="nt"&gt;-c&lt;/span&gt; 3 10.200.2.2
PING 10.200.2.2 &lt;span class="o"&gt;(&lt;/span&gt;10.200.2.2&lt;span class="o"&gt;)&lt;/span&gt;: 56 data bytes
64 bytes from 10.200.2.2: &lt;span class="nb"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;62 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;301.294 ms
64 bytes from 10.200.2.2: &lt;span class="nb"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;62 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;297.191 ms
64 bytes from 10.200.2.2: &lt;span class="nb"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;62 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;297.285 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both hosts have IPs assigned to the &lt;code&gt;multi-host&lt;/code&gt; bridges, &lt;code&gt;10.200.1.1&lt;/code&gt; and &lt;code&gt;10.200.2.1&lt;/code&gt; respectively, which should also be reachable from the containers or hosts on both machines.&lt;/p&gt;

&lt;p&gt;You can see from the &lt;code&gt;ping&lt;/code&gt; output that the latency is quite high (~300 ms) in my case because the packets have to travel from Australia to Finland and back. Take this into account when planning to run latency-sensitive applications across machines in different regions. As my friend Sergey &lt;a href="https://x.com/megaserg/status/1857438834822090793" rel="noopener noreferrer"&gt;once said&lt;/a&gt;, "sucks to be limited by the speed of light tbh".&lt;/p&gt;

&lt;h2&gt;Step 5: Make the configuration persistent&lt;/h2&gt;

&lt;p&gt;To ensure this setup survives reboots, you need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Persist iptables rules.&lt;/li&gt;
&lt;li&gt;Automatically start the WireGuard interface on boot.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Persisting iptables rules&lt;/h3&gt;

&lt;p&gt;You can use the &lt;code&gt;iptables-persistent&lt;/code&gt; package to save and restore iptables rules on boot. But a more reliable way is to use the &lt;code&gt;PostUp&lt;/code&gt; and &lt;code&gt;PostDown&lt;/code&gt; options in the WireGuard configs to configure iptables automatically when WireGuard starts and stops.&lt;/p&gt;

&lt;p&gt;Append the following lines to the &lt;code&gt;[Interface]&lt;/code&gt; section in &lt;code&gt;/etc/wireguard/wg0.conf&lt;/code&gt;. Make sure to replace &lt;code&gt;&amp;lt;network-id&amp;gt;&lt;/code&gt; with your actual Docker network ID from Step 3. The &lt;code&gt;%i&lt;/code&gt; is replaced by WireGuard with the interface name (&lt;code&gt;wg0&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;On Machine 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Interface]&lt;/span&gt;
&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="py"&gt;PostUp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;iptables -I DOCKER-USER -i %i -o br-&amp;lt;network-id&amp;gt; -j ACCEPT; iptables -t nat -I POSTROUTING -s 10.200.1.0/24 -o %i -j RETURN&lt;/span&gt;
&lt;span class="py"&gt;PostDown&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;iptables -D DOCKER-USER -i %i -o br-&amp;lt;network-id&amp;gt; -j ACCEPT; iptables -t nat -D POSTROUTING -s 10.200.1.0/24 -o %i -j RETURN&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Machine 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Interface]&lt;/span&gt;
&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="py"&gt;PostUp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;iptables -I DOCKER-USER -i %i -o br-&amp;lt;network-id&amp;gt; -j ACCEPT; iptables -t nat -I POSTROUTING -s 10.200.2.0/24 -o %i -j RETURN&lt;/span&gt;
&lt;span class="py"&gt;PostDown&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;iptables -D DOCKER-USER -i %i -o br-&amp;lt;network-id&amp;gt; -j ACCEPT; iptables -t nat -D POSTROUTING -s 10.200.2.0/24 -o %i -j RETURN&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Start WireGuard on boot&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;wireguard-tools&lt;/code&gt; package provides a convenient systemd service to manage WireGuard interfaces. Since our iptables rules should have priority over Docker's rules, WireGuard must start after Docker.&lt;/p&gt;

&lt;p&gt;Create a systemd drop-in configuration for this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/systemd/system/wg-quick@wg0.service.d/
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/systemd/system/wg-quick@wg0.service.d/docker-dependency.conf &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
[Unit]
After=docker.service
Requires=docker.service
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then enable the WireGuard service to start on boot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;wg-quick@wg0.service
systemctl daemon-reload
&lt;span class="c"&gt;# Verify the unit includes the drop-in configuration&lt;/span&gt;
systemctl &lt;span class="nb"&gt;cat &lt;/span&gt;wg-quick@wg0.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Scaling beyond two machines&lt;/h2&gt;

&lt;p&gt;Adding a third machine means following the same steps as above on it and updating the WireGuard configs on &lt;em&gt;all&lt;/em&gt; existing machines. Each machine needs a &lt;code&gt;[Peer]&lt;/code&gt; section for every other machine in the network. With 5 machines, that's 4 peer entries per config file, or 20 peer configurations in total, forming a full mesh topology.&lt;/p&gt;
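&lt;p&gt;To make this concrete, here's a sketch of what &lt;code&gt;/etc/wireguard/wg0.conf&lt;/code&gt; on a hypothetical Machine 3 (with Docker subnet &lt;code&gt;10.200.3.0/24&lt;/code&gt;) could look like. The keys are placeholders, and Machines 1 and 2 would each need a matching &lt;code&gt;[Peer]&lt;/code&gt; section for Machine 3 added to their own configs:&lt;/p&gt;

```ini
[Interface]
ListenPort = 51820
PrivateKey = <Machine 3 private key>

# Peer: Machine 1
[Peer]
PublicKey = <Machine 1 public key>
AllowedIPs = 10.200.1.0/24
# Omit Endpoint if Machine 1 is behind NAT and not reachable
PersistentKeepalive = 25

# Peer: Machine 2
[Peer]
PublicKey = <Machine 2 public key>
AllowedIPs = 10.200.2.0/24
Endpoint = 157.180.72.195:51820
PersistentKeepalive = 25
```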

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3lm9zv1hbq2i8gliiss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3lm9zv1hbq2i8gliiss.png" alt="WireGuard full mesh" width="290" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  DNS resolution
&lt;/h3&gt;

&lt;p&gt;The main limitation of this setup is that containers can't find each other by name across machines. You need to use their IP addresses directly or implement a service discovery solution like Consul or CoreDNS.&lt;/p&gt;

&lt;p&gt;For small deployments, you can assign static IPs to containers and use those IPs in your app configuration. But service discovery is essential for larger and more dynamic deployments.&lt;/p&gt;
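&lt;p&gt;As a sketch, assuming a Docker network created with an explicit subnet as shown earlier (the network name, container name, and IP here are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Static IPs only work on user-defined networks with a known subnet
docker run -d --name db --network my-network --ip 10.200.1.100 postgres:16

# Containers on other machines can then reach it at 10.200.1.100:5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;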

&lt;h3&gt;
  
  
  NAT traversal constraints
&lt;/h3&gt;

&lt;p&gt;For WireGuard connections to work, at least one machine in each pair must be publicly reachable. The connection fails if both machines are behind NAT. While solutions exist (STUN/TURN servers, UDP hole punching), they're beyond the scope of this guide.&lt;/p&gt;

&lt;p&gt;Common scenarios that work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Cloud VPS (public or private IP) ↔ Cloud VPS (public or private IP). Both can use private IPs only if they're in the same cloud provider's network&lt;/li&gt;
&lt;li&gt;✅ Homelab (behind NAT) ↔ Cloud VPS (public IP)&lt;/li&gt;
&lt;li&gt;✅ Homelab (private IP) ↔ Homelab (private IP on the same local network)&lt;/li&gt;
&lt;li&gt;❌ Homelab (behind NAT) ↔ Friend's homelab (behind NAT) — requires a relay server&lt;/li&gt;
&lt;/ul&gt;
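&lt;p&gt;To check whether a given pair of machines can actually connect, look for a recent handshake on either side:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Prints the last handshake timestamp per peer;
# a recent timestamp means the tunnel is up, 0 means no connection yet
wg show wg0 latest-handshakes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;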

&lt;h2&gt;
  
  
  Automating with Uncloud
&lt;/h2&gt;

&lt;p&gt;As your setup grows, manually allocating subnets for Docker networks (making sure each machine gets a unique range like &lt;code&gt;10.200.1.0/24&lt;/code&gt;, &lt;code&gt;10.200.2.0/24&lt;/code&gt;) and keeping the WireGuard configs on every machine in sync quickly becomes tedious.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://github.com/psviderski/uncloud" rel="noopener noreferrer"&gt;Uncloud&lt;/a&gt;, an open source clustering and deployment tool for Docker, to handle all the heavy lifting automatically. You can get the same result and much more with just a few commands.&lt;/p&gt;

&lt;p&gt;Initialise a new cluster on your first machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uc machine init user@machine1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add more machines to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uc machine add user@machine2
uc machine add user@machine3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what these commands do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the &lt;code&gt;uncloud&lt;/code&gt; Docker network on each machine with unique subnets (&lt;code&gt;10.210.0.0/24&lt;/code&gt;, &lt;code&gt;10.210.1.0/24&lt;/code&gt;, etc.).&lt;/li&gt;
&lt;li&gt;Generate WireGuard key pairs and distribute public keys across machines.&lt;/li&gt;
&lt;li&gt;Start a full mesh WireGuard network.&lt;/li&gt;
&lt;li&gt;Configure iptables rules for container communication.&lt;/li&gt;
&lt;li&gt;Make everything persistent across reboots.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Beyond the network setup, you also get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-machine &lt;a href="https://docs.docker.com/reference/compose-file/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt; deployments with zero downtime.&lt;/li&gt;
&lt;li&gt;Built-in DNS server that resolves container IPs by their service names.&lt;/li&gt;
&lt;li&gt;Automatic HTTPS and reverse proxy configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out the &lt;a href="https://uncloud.run/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternative solutions
&lt;/h2&gt;

&lt;p&gt;Before settling on the WireGuard approach, I evaluated several alternatives. Note that I only considered lightweight solutions suitable for Docker. Kubernetes and its CNI ecosystem deserve a separate discussion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Swarm overlay network
&lt;/h3&gt;

&lt;p&gt;Docker Swarm includes built-in &lt;a href="https://docs.docker.com/engine/network/drivers/overlay/" rel="noopener noreferrer"&gt;overlay networking&lt;/a&gt;. However, to use an overlay network, you need to run a &lt;a href="https://docs.docker.com/engine/swarm/" rel="noopener noreferrer"&gt;Swarm cluster&lt;/a&gt; on all machines. This introduces additional complexity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster nodes must &lt;a href="https://docs.docker.com/engine/swarm/admin_guide/#maintain-the-quorum-of-managers" rel="noopener noreferrer"&gt;maintain the quorum&lt;/a&gt;. Losing quorum impacts the functionality of overlay networks.&lt;/li&gt;
&lt;li&gt;Ports 2377, 7946, and 4789 must be exposed to untrusted networks (if connecting machines over the internet) for cluster management, node communication, and VXLAN overlay traffic.&lt;/li&gt;
&lt;li&gt;VXLAN traffic is unencrypted by default, requiring additional &lt;a href="https://docs.docker.com/engine/swarm/swarm-tutorial/#open-protocols-and-ports-between-the-hosts" rel="noopener noreferrer"&gt;hardening&lt;/a&gt; with IPSec and firewalls.&lt;/li&gt;
&lt;li&gt;Every node must be publicly reachable. VXLAN fails if machines are behind NAT.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If these limitations are acceptable, an overlay network is a great option. Note that you can use an overlay network with regular containers without using any other Swarm features.&lt;/p&gt;
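&lt;p&gt;For instance, an &lt;code&gt;--attachable&lt;/code&gt; overlay network lets standalone containers join it without defining Swarm services. A sketch (the network name is arbitrary; run the create command on a manager node):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# On a Swarm manager: create an attachable overlay network
# (--opt encrypted turns on IPSec encryption of the VXLAN traffic)
docker network create -d overlay --attachable --opt encrypted my-overlay

# On any node in the Swarm: attach a regular container to it
docker run -d --network my-overlay nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;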

&lt;h3&gt;
  
  
  Flannel
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/flannel-io/flannel" rel="noopener noreferrer"&gt;Flannel&lt;/a&gt; is battle-tested in Kubernetes but can also be used with Docker. It supports multiple &lt;a href="https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md" rel="noopener noreferrer"&gt;backends&lt;/a&gt; including VXLAN and WireGuard.&lt;/p&gt;

&lt;p&gt;The main caveat is that Flannel requires running etcd as the datastore for coordination. Depending on your availability requirements, you may need to set up an etcd cluster with multiple nodes. This is not a problem if you're already using Kubernetes. But if you're just running a few Docker hosts, it might seem like overkill.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tailscale
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://tailscale.com/" rel="noopener noreferrer"&gt;Tailscale&lt;/a&gt; makes WireGuard easy with automatic NAT traversal and key management, but it's not designed as a generic site-to-site VPN for connecting networks. Instead, it connects individual devices and provides identity-based access controls.&lt;/p&gt;

&lt;p&gt;The recommended approach for using &lt;a href="https://tailscale.com/kb/1282/docker" rel="noopener noreferrer"&gt;Tailscale with Docker&lt;/a&gt; is to connect each individual container to a Tailscale network. This means deploying an additional Tailscale container alongside every application container.&lt;/p&gt;

&lt;p&gt;Tailscale's &lt;a href="https://tailscale.com/kb/1019/subnets" rel="noopener noreferrer"&gt;subnet router&lt;/a&gt; feature might work to expose Docker networks similar to our setup, but I haven't tested this approach.&lt;/p&gt;
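&lt;p&gt;If you want to experiment with that idea, the rough shape (untested, flags from Tailscale's docs) would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# On the machine hosting the Docker network: advertise its subnet
# (requires IP forwarding enabled on the host; the route must also
# be approved in the Tailscale admin console)
tailscale up --advertise-routes=10.200.1.0/24

# On other machines: accept routes advertised by subnet routers
tailscale up --accept-routes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;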

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;That's it! Now you know how to securely connect Docker containers across multiple machines using WireGuard. The manual setup works great for a handful of machines that you don't need to change often, but configuration management becomes tedious as you scale.&lt;/p&gt;

&lt;p&gt;If you don't want to mess with manual configuration, consider automation tools like Uncloud, or evaluate whether you need a full orchestration platform.&lt;/p&gt;




&lt;p&gt;If you enjoyed this, dropping a star on &lt;a href="https://github.com/psviderski/uncloud" rel="noopener noreferrer"&gt;https://github.com/psviderski/uncloud&lt;/a&gt; really brightens my day and keeps me motivated to build and write more.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>wireguard</category>
      <category>networking</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
