PH Saurav

Docker Networking Demystified: From Bridge to Overlay with Examples

Containerization has transformed everything from software development to deployment, and Docker sits at the center of this transformation. Yet despite how widely Docker is used, its inner workings are surrounded by confusion and mystery.
In this article, we will try to demystify Docker networking by exploring key concepts with clear, hands-on examples. We'll break down how these networks function and why they matter.

Key Concepts:

The Foundation:

Although many draw parallels between Docker and Virtual Machines, that comparison misses the foundational architectural difference: Docker's building blocks are designed for isolation rather than full emulation. All processes run on the same host OS, isolated through Linux namespaces and managed by cgroups. Namespaces provide the boundaries, and cgroups manage the resource allocation. For networking, Docker leverages network namespaces, which will be our deep dive today. We'll explore the fascinating world of cgroups in a future discussion.

Network Namespaces:

Linux namespaces isolate system resources, and the network namespace is no exception—it creates a completely independent network stack. Each namespace owns its own routing tables, firewall rules,
network interfaces, and even virtual devices, making every container believe it has the entire network to itself.
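
You can see this isolation with plain iproute2, no Docker involved. A minimal sketch (the namespace name demo-ns is just an illustration):

sudo ip netns add demo-ns
sudo ip netns exec demo-ns ip link list   # only lo shows up, and it is DOWN
sudo ip netns exec demo-ns ip route       # prints nothing: no routes at all
sudo ip netns del demo-ns                 # clean up

The host's interfaces and routes are completely invisible from inside the namespace, which is exactly the property Docker builds on.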

Linux Bridge:

A bridge connects devices at Layer 2 of the OSI model, forwarding traffic between network interfaces based on their physical MAC addresses and extending the broadcast domain.
A Linux bridge is the virtual equivalent. It creates an internal Layer 2 bridge that links multiple network namespaces, letting their interfaces communicate as if they were plugged into the same physical bridge.
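
To make this concrete, here is a minimal sketch using only iproute2 that wires two namespaces together through a Linux bridge with veth pairs; it is roughly what Docker's bridge driver automates for you. All names and the 10.0.0.0/24 addresses are made up for illustration:

# Two namespaces and an empty bridge
sudo ip netns add ns1
sudo ip netns add ns2
sudo ip link add br-demo type bridge
sudo ip link set br-demo up

# One veth pair per namespace: one end moves into the namespace, the other attaches to the bridge
sudo ip link add veth1 type veth peer name veth1-br
sudo ip link set veth1 netns ns1
sudo ip link set veth1-br master br-demo
sudo ip link set veth1-br up
sudo ip link add veth2 type veth peer name veth2-br
sudo ip link set veth2 netns ns2
sudo ip link set veth2-br master br-demo
sudo ip link set veth2-br up

# Give each namespace an address and bring its interface up
sudo ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth1
sudo ip netns exec ns1 ip link set veth1 up
sudo ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth2
sudo ip netns exec ns2 ip link set veth2 up

# The namespaces can now reach each other through the bridge
# (if Docker's iptables FORWARD policy blocks this, which can happen when br_netfilter
# is loaded, first run: sudo iptables -I FORWARD -i br-demo -o br-demo -j ACCEPT)
sudo ip netns exec ns1 ping -c 2 10.0.0.2

# Clean up
sudo ip netns del ns1
sudo ip netns del ns2
sudo ip link del br-demo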

Quick Reference: Core Networking Elements

eth0 (Ethernet) – The primary network interface on most Linux systems. Containers and the host reach the outside world through this interface.

lo (loopback) – The loopback interface (127.0.0.1/localhost). Traffic sent here stays inside the same namespace. This allows processes on the same host to communicate using networking APIs (over ports), but the data never physically leaves the system.

MAC Address – Layer-2 hardware address (e.g., 02:42:ac:11:00:02) used by bridges and switches to deliver frames to the correct interface within the same local segment.

IP Address – Layer-3 identifier (e.g., 172.17.0.2) that lets namespaces locate and talk to each other across networks.

Environment Setup:

Practice cements understanding. To follow along with the examples below, you will just need a Linux machine, physical or virtual, with Docker installed. I am using an Ubuntu 20.04 VM running Docker version 28.3.2.

Start by listing every network interface on the host:

ip link list

You should see something similar to this:
Hosts Network Interface List

The main interfaces that concern us are the loopback interface (lo) and the primary Ethernet interface (eth0). There is another interesting interface, docker0, which we will discuss in detail later when covering the bridge driver.

Docker Networking

Networking in Docker is handled by pluggable drivers; there are six main ones. We'll walk through each driver, then peek under the hood to see how they map to network namespaces in your environment.

1. None Network Driver

This driver completely isolates the container from any outside network access. To see what that means in practice, let's look at what happens when we choose the none driver with the --network none flag.

Let's start a container from the alpine image with the network driver set to none and open a shell in it with this command:

sudo docker run -it --rm --network none alpine sh

Now, inside the container, run the same command we used earlier on the host to list network interfaces:

ip link list
You will see something like this:

None Network driver output

Notice the container only exposes the lo (loopback) interface—eth0 is absent. Without eth0, the container has no path to the outside world; lo alone provides an isolated, localhost-only network that applications inside can use for internal communication.
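
To confirm there really is no way out, try reaching an external address from the same shell; with no eth0 the attempt fails immediately (8.8.8.8 is just an arbitrary public IP used for the test):

ping -c 1 -W 1 8.8.8.8   # expect "Network unreachable" or total packet loss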

None Network Diagram

Exit the container:

exit

WHY none: It's useful when you want your container fully isolated, with no external connectivity at all, most often for security reasons.

2. Host Network Driver

The host driver removes network isolation. Containers share the host's network stack, including IP addresses, ports, and interfaces.

To see the host network driver in action, let's start the same image in interactive shell mode with --network host and list the network interfaces:

sudo docker run -it --rm --network host alpine sh
ip link list

You will notice that all the interfaces are exactly the same as the host's interfaces.
Host Network Driver Output

Host Network Driver Diagram

Exit the container:

exit

WHY host: It's useful when you don't need network isolation and want maximum, close-to-bare-metal network performance.
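
As a quick illustration of what sharing the host's stack buys you, a server started inside a host-network container binds straight onto a host port with no -p mapping involved. A sketch using busybox's httpd from the same alpine image (port 8080 is an assumption; pick any port that is free on your host):

sudo docker run -d --rm --name host-web --network host alpine sh -c 'mkdir -p /www && echo hello > /www/index.html && httpd -f -p 8080 -h /www'
curl http://localhost:8080   # run on the host; answered directly by the container (wget -qO- works too)
sudo docker stop host-web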

3. Bridge Network Driver

Finally, we arrive at the bridge networking driver, the one we all use most of the time. As mentioned above, a bridge is a device that connects devices or network segments at Layer 2 and expands the network.
In this network mode, containers are connected to a virtual Linux bridge. Containers attached to it receive their own IP addresses and can talk to one another freely while still sharing the host's connection to the outside world.

3.1 Default Bridge Network

When you create a container without specifying any network driver, it is attached to the default bridge network. Now the question is: I didn't create any bridge, so what is my container connecting to? 🤔

Remember when we ran ip link list on the host? There was an interesting interface called docker0. I told you we would come back to it. That mysterious interface is our default bridge.

Docker creates it automatically, and any container started without an explicit network connects to this default bridge.

Now, to test this out, let's first check the current state of the bridge. It should be empty if no containers are running.

ip link show master docker0

Let's create two containers in the background and check their connection:

sudo docker run -d --name container1 alpine sleep 3600
sudo docker run -d --name container2 alpine sleep 3600

sudo docker exec container1 ip addr # Take the eth0 ip address
sudo docker exec container2 ping -c 2 <container1-ip-address>

Connection between container

Here we can see that one container can ping the other, so they are interconnected.
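
Under the hood, each of these containers is just a process sitting in its own network namespace. If you want to peek at that namespace directly from the host, one way is nsenter against the container's PID (a sketch; nsenter comes with the util-linux package that ships with Ubuntu):

pid=$(sudo docker inspect -f '{{.State.Pid}}' container1)
sudo nsenter -t "$pid" -n ip addr   # same interfaces you would see with docker exec container1 ip addr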

If we now again take a look at our default bridge:

ip link show master docker0

Inside docker0 bridge

Now we can see two interfaces attached to the docker0 bridge, one veth endpoint for each of our two containers.
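
One more note on the bridge before we clean up: outbound traffic from these containers is NATed through the host, and to accept inbound traffic from outside you publish a port with -p. A quick sketch reusing busybox's httpd from the alpine image (the 8080:80 mapping is just an example):

sudo docker run -d --rm --name web -p 8080:80 alpine sh -c 'mkdir -p /www && echo hello > /www/index.html && httpd -f -p 80 -h /www'
curl http://localhost:8080   # the host forwards this through docker0 and NAT to port 80 in the container
sudo docker stop web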

Let's clean up the containers:

sudo docker stop container1 container2
sudo docker rm container1 container2

3.2 Custom Bridge Network

Instead of using the default bridge Docker gives us, we can create our own custom bridge:

sudo docker network create my-custom-net 

Now, if we check the interface list of the host, we will see that there is a new entry. If we check inside it, we will find it empty. This is our new custom bridge.

Custom Bridge Output

Let's create two new containers connected to this custom bridge and check the connection between them:

sudo docker run -d --name container1 --network my-custom-net alpine sleep 3600
sudo docker run -d --name container2 --network my-custom-net alpine sleep 3600

sudo docker exec container1 ping -c 2 container2

Wait! With the default bridge, we needed to look up container1's IP address to ping it from container2. What is going on with the custom bridge? We are pinging container2 directly by name. Would this work with the default bridge?

The answer is no. The custom bridge has some advantages over the default bridge, which is why Docker's documentation recommends user-defined bridges over the default one. Here are the major advantages a custom bridge gives you:

  1. A user-defined name for ease of management (e.g., my-network)
  2. Automatic DNS resolution by container name, so we can use the container name instead of its IP.
  3. Better isolation by grouping containers into different networks rather than lumping everything into a single default network.
  4. Full control over the IP range, gateway, etc. (see the sketch right after this list)
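
For example, point 4 looks like this in practice. The 10.10.0.0/24 range and the network name below are arbitrary, chosen only to show the flags:

sudo docker network create --subnet=10.10.0.0/24 --gateway=10.10.0.1 my-sized-net
sudo docker network inspect my-sized-net   # the IPAM section reflects the subnet and gateway we asked for
sudo docker network rm my-sized-net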

**For most use cases, these three driver types are sufficient.**
Let's clean up the containers:

sudo docker stop container1 container2
sudo docker rm container1 container2
sudo docker network rm my-custom-net

4. IPvlan Network Driver

IPvlan creates virtual interfaces that share the host's MAC address but have different IP addresses. It's perfect for scenarios where you need multiple IPs on the same interface. The containers attach directly to the host's interface.

It offers two flavors:

• L2 mode – host interface behaves like a switch (Layer 2)
• L3 mode – host interface behaves like a router (Layer 3)

IPvlan Diagram

The switch-vs-router debate is a rabbit hole for another day🐰. The key point is you pick the mode that matches your network design.

Let's see how it interacts with the system by trying the L2 mode.
First, we need to know which subnet and gateway the host is on:

ip route
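
The output will look roughly like the two lines below. These values are only illustrative (the .50 host address is made up, and the subnet and gateway happen to match the ones I use in the next command); yours will differ. The default via line gives the gateway, and the /24 line gives the subnet on eth0:

default via 192.168.110.1 dev eth0 proto dhcp src 192.168.110.50 metric 100
192.168.110.0/24 dev eth0 proto kernel scope link src 192.168.110.50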

So, to create an L2 IPvlan network on this subnet (swap in the subnet and gateway from your own ip route output):

sudo docker network create -d ipvlan \
  --subnet=192.168.110.0/24 \
  --gateway=192.168.110.1 \
  -o ipvlan_mode=l2 \
  -o parent=eth0 \
  ipvlan-l2-net

Now, if we create a container in this network and check its network interface:

sudo docker run -it --rm --network ipvlan-l2-net alpine sh
ip addr show eth0

We can see that the MAC address of the interface is the same as the host's, but the IP addresses are different.
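
To double-check the MAC claim, compare against the parent interface from a second terminal on the host (eth0 here is simply the parent we passed with -o parent=):

ip link show eth0 | grep ether   # the link/ether value should match the container's eth0 MAC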

Exit the container with exit, then clean up:

sudo docker network rm ipvlan-l2-net

WHY IPvlan: Use IPvlan when you need close-to-bare-metal network performance (no bridge overhead) or when you must place containers directly on the same Layer-2 segment as the host. This can bypass the need for NAT/port mapping to reach the outside network altogether.

5. MACvlan Network Driver

MACvlan creates sub-interfaces with unique MAC addresses, which makes containers appear as physical devices on your network, so they can talk to everything else on it.

MACvlan Diagram

MACvlan offers two modes:

  1. Bridge mode – every container shares the same flat Layer-2 network; all MAC addresses live in one broadcast domain.

  2. 802.1Q VLAN (trunk bridge) mode – traffic is tagged with 802.1Q VLAN IDs, slicing the broadcast domain into isolated VLANs while still using the same physical interface.

Let's try out MACvlan bridge mode. Get the subnet and gateway the same way as in the IPvlan section and create a macvlan network:

sudo docker network create -d macvlan \
  --subnet=192.168.110.0/24 \
  --gateway=192.168.110.1 \
  -o parent=eth0 \
  macvlan-bridge-net

Run two containers in this network using macvlan-bridge-net:

sudo docker run -d --rm --network macvlan-bridge-net --name container1 --ip 192.168.110.201 alpine sleep 3600
sudo docker run -d --rm --network macvlan-bridge-net --name container2 --ip 192.168.110.202 alpine sleep 3600

Now ping them from any device other than the host. The host itself won't be able to ping these IPs, because macvlan traffic bypasses the host kernel's own network stack for these addresses.

ping -c 2 192.168.110.201
ping -c 2 192.168.110.202

When I try to ping the container from a physical device in my network, it results in success. So these containers are now acting like a device on my network with their own IP and MAC addresses.

MACvlan Output
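
If you do need the host itself to reach these containers, one common workaround (shown here only as a sketch, not something this demo requires) is to give the host its own macvlan sub-interface on the same parent. The shim name and the .250 address are made up; pick an unused address in your subnet:

sudo ip link add macvlan-shim link eth0 type macvlan mode bridge
sudo ip addr add 192.168.110.250/32 dev macvlan-shim
sudo ip link set macvlan-shim up
sudo ip route add 192.168.110.201/32 dev macvlan-shim   # send traffic for the container via the shim
ping -c 2 192.168.110.201                               # now succeeds from the host
sudo ip link del macvlan-shim                           # removes the shim (and its route) when you are done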

Let's clean up:

sudo docker stop container1 container2
sudo docker network rm macvlan-bridge-net

WHY MACvlan: When you want containers to act like separate physical machines on the same LAN—each with its own MAC and IP—so they’re reachable directly by any host without NAT or port-mapping. The container will act like a standalone server.

6. Overlay Network Driver

So far, all of our networking has been confined to a single host. What if we have multiple hosts? Now the game moves towards the kind of multi-host networking Kubernetes deals with.
But Docker has its own solution to the multi-host problem: its overlay network, which is implemented here using Docker Swarm.

If we look at the diagram, we can see there is an extra VXLAN connection linking the hosts. Now, what is VXLAN?

VXLAN: VXLAN is an overlay protocol that wraps Layer-2 Ethernet frames inside Layer-3 UDP packets, letting you stitch separate Layer-3 networks into one big, flat Layer-2 domain—so containers on different hosts think they're plugged into the same switch.

Now, to play with the overlay network, we need at least two hosts. If you are using VMs, just clone yours to create a second one and follow along. If you have only one host, have faith in my screenshots😇.

On the host that will act as the manager, initialize Swarm (replace <manager-ip> with that host's IP) and create an attachable overlay network:

sudo docker swarm init --advertise-addr <manager-ip>
sudo docker network create -d overlay --attachable demo-overlay

Running the swarm init command will output a join command. Now, run that join command on the other host to join it to the swarm as a worker.
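
I won't paste my token here, but the join command has the general shape below; use the exact command (token included) that your own swarm init printed, where 2377 is Swarm's default management port:

sudo docker swarm join --token <worker-token> <manager-ip>:2377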

Now that we have an overlay network spanning two connected hosts, create one container on the manager host:

sudo docker run -d --rm --name container1 --network demo-overlay alpine sleep 3600

Create another container on the worker node:

sudo docker run -d --rm --name container2 --network demo-overlay alpine sleep 3600

Now run this ping test from the manager:

sudo docker exec container1 ping -c 2 container2

Success! Think about it: these two containers are running on entirely different VMs, yet they can reach each other by name. This is the overlay network at work.
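
If you are curious about what Swarm built for you, inspect the network on the manager. In my run it lists the overlay's subnet, the locally attached container, and a peers section naming the other host (the exact layout of the output may vary between Docker versions):

sudo docker network inspect demo-overlay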

Let's clean up:

  1. On the worker:
sudo docker stop container2
sudo docker swarm leave --force
  2. On the manager:
sudo docker stop container1
sudo docker network rm demo-overlay
sudo docker swarm leave --force

WHY overlay: The overlay network comes in when you run multi-host deployments, for example with Docker Swarm services, and want your containers to communicate across host boundaries, whether physical or virtual.

Wrapping Up

Mission accomplished😰, that's a wrap! We've covered all six networking drivers Docker offers, giving you the flexibility to tailor container connectivity to nearly any use case, from fully isolated sandboxes to multi-host overlays. I hope you've tried out the examples and seen for yourself how Linux networking makes all of these configurations possible. Another key piece of the puzzle is cgroups. Stay tuned; we'll dive into that concept another day. Till then, happy dockering!🐳
