Containers do not have “networking” in the abstract sense.
They participate in Linux networking through isolation, indirection, and policy.
When a container sends a packet, it does not pass through Docker itself. It leaves a network namespace, traverses a virtual Ethernet pair, crosses a bridge or routing boundary, and is transformed by netfilter rules before it ever reaches a wire.
Understanding this path explains nearly every networking behavior attributed to Docker.
## Network Namespaces as the Isolation Boundary
Each container runs inside its own network namespace containing:
- Interfaces
- Routes
- ARP tables
- iptables chains
- Loopback device
Nothing inside the container is virtualized. The kernel enforces isolation by scoping visibility.
Docker’s responsibility is namespace construction and wiring — not packet delivery.
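A quick way to see that scoping in action is to create a bare namespace by hand. A minimal sketch (the namespace name `demo` is arbitrary):

```bash
# Create a fresh network namespace; it starts with only a loopback device
sudo ip netns add demo

# Only lo is visible inside: no host interfaces leak in
sudo ip netns exec demo ip link show

# The routing table is empty too: isolation is visibility scoping
sudo ip netns exec demo ip route show

# Clean up
sudo ip netns del demo
```

Docker creates its namespaces through the kernel API rather than `ip netns`, but the resulting isolation is identical.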
## The Default Bridge Is a Linux Bridge
The default Docker network is backed by a Linux bridge named `docker0`.
When a container is created:
- A veth pair is allocated
- One endpoint enters the container namespace as `eth0`
- The peer endpoint attaches to `docker0`
- An IP is assigned from the bridge subnet
- NAT rules are installed for outbound traffic
The bridge provides Layer 2 adjacency. Routing and NAT occur outside the container.
This model trades a little performance for isolation and control, and it remains Docker’s default for a reason.
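Each step of that wiring is observable on the host. A sketch, assuming a container named `web` (the name is illustrative):

```bash
# Start a container on the default bridge network
docker run -d --name web nginx

# Host half of the veth pair, attached to docker0
ip link show master docker0

# Container half: eth0 with an address from the bridge subnet
docker exec web ip addr show eth0

# The MASQUERADE rule that NATs outbound container traffic
# (172.17.0.0/16 is the usual default bridge subnet)
sudo iptables -t nat -S POSTROUTING
```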
## Port Publishing Is Address Translation, Not Exposure
Publishing a port does not modify the container. It installs DNAT rules on the host that rewrite incoming traffic.
Traffic flow:
- Host interface receives packet
- iptables PREROUTING rewrites destination
- Packet forwarded to container IP
- Return traffic is rewritten back by connection tracking
This explains why:
- Containers do not bind host ports
- Port collisions are resolved at the host layer
- Network performance differs from host mode
Port publishing is policy, not plumbing.
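The rewrite is visible directly in the nat table. A sketch, publishing host port 8080 to container port 80 (the container name and addresses are illustrative):

```bash
# Publish host port 8080 to the container's port 80
docker run -d --name web -p 8080:80 nginx

# The DOCKER chain in the nat table holds the DNAT rule Docker installed
sudo iptables -t nat -L DOCKER -n -v
# Expect something like: DNAT tcp dpt:8080 to:172.17.0.2:80
```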
## Network Modes Are Policy Choices
| Mode | Description | Use Case | Trade-off |
|---|---|---|---|
| Bridge | Isolated namespace, NATed egress, explicit ingress | Default, safest | NAT overhead |
| Host | No namespace, no translation | Max performance | No isolation |
| None | Namespace with only loopback | Batch jobs, hardened workloads | No connectivity |
| Macvlan | Real MAC address, appears as physical device | VM-like networking | Bypasses iptables |
| Overlay | Encapsulation for multi-host | Swarm, multi-host clusters | Encapsulation latency |
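Switching modes is a single flag at run time. A sketch of the first three (macvlan and overlay require a network object to be created first):

```bash
# Bridge (the default): isolated namespace, NATed egress
docker run --rm alpine ip addr

# Host: no separate namespace -- the container sees the host's real interfaces
docker run --rm --network host alpine ip addr

# None: a namespace containing only loopback
docker run --rm --network none alpine ip addr
```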
## libnetwork Is Control Plane, Not Data Plane
libnetwork programs the kernel:
- Allocates IPs
- Selects drivers
- Creates endpoints
- Configures routing and firewall rules
It does not forward packets. The kernel always does.
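The split is easy to observe: one CLI call to the control plane, and all the resulting state lives in the kernel. A sketch (the network name `appnet` and subnet are illustrative):

```bash
# Ask libnetwork, via dockerd, to create a user-defined bridge network
docker network create --subnet 172.25.0.0/16 appnet

# The result: a new kernel bridge (named br-<network id prefix>)...
ip link show type bridge

# ...and new netfilter rules scoped to the new subnet
sudo iptables -t nat -S | grep 172.25
```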
## Multi-Container Communication Is Name Resolution
User-defined bridge networks include an embedded DNS service.
Containers discover each other by name — Docker resolves names to IPs at runtime.
No static environment variables are needed, unlike the legacy `--link` mechanism.
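A sketch of resolution in practice (the network and container names are illustrative):

```bash
docker network create appnet
docker run -d --name db --network appnet redis

# A second container on the same network resolves "db" by name
docker run --rm --network appnet alpine ping -c 1 db

# The embedded DNS server answers at 127.0.0.11 inside the namespace
docker run --rm --network appnet alpine cat /etc/resolv.conf
```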
## Debugging Means Leaving the Container
Most Docker networking failures occur outside the container namespace.
Useful commands:
- `ip link show type veth` — veth pairs
- `brctl show` or `ip link show docker0` — bridge membership
- `ip route` (host) vs `docker exec <id> ip route` — routing
- `iptables -t nat -L -v -n` — NAT chains
- `nsenter --net=/proc/<pid>/ns/net` — enter namespace
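Tying these together, a typical host-side inspection of a single container (assuming a container named `web`):

```bash
# Find the container's init PID on the host
PID=$(docker inspect -f '{{.State.Pid}}' web)

# Run host tooling inside the container's network namespace --
# nothing needs to be installed in the image itself
sudo nsenter --net=/proc/$PID/ns/net ip addr
sudo nsenter --net=/proc/$PID/ns/net ss -tln
```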
Container logs rarely explain network issues. The host almost always does.
Docker is often blamed. The kernel is usually guilty.
## Performance and Security Trade-offs
- Bridge: NAT overhead
- Host: No isolation
- Macvlan: Bypasses iptables
- Overlay: Encapsulation latency
Docker networking prioritizes containment over concealment. Security comes from explicit policy.
## Summary
Docker networking is a composition of kernel primitives:
- Network namespaces for isolation
- veth pairs for connectivity
- Bridges/routes for topology
- netfilter for policy
- libnetwork for orchestration
Once internalized, Docker networking becomes predictable.