DEV Community

Teguh Coding

Docker Networking Demystified: How Containers Actually Talk to Each Other

Most developers treat Docker networking like a black box. You type docker run, the container starts, things work (or they don't), and you spend 45 minutes Googling why your app can't reach the database.

I've been there. After debugging one too many "connection refused" errors in containers that should be talking to each other, I decided to actually understand how Docker networking works under the hood. This post is what I wish I had read back then.

The Mental Model: Containers Are Not VMs

Before diving in, one shift in thinking matters: containers share the host kernel. They're not isolated machines — they're isolated processes. Docker networking is about creating logical boundaries and channels between those processes.

When Docker installs, it creates a virtual network bridge called docker0 on your host. Think of it like a virtual switch that containers plug into. Each container gets a virtual ethernet interface (veth pair), one end inside the container, one end connected to docker0.

You can see this right now:

ip addr show docker0
# Expect an address in Docker's default subnet, 172.17.0.0/16 — docker0 itself is usually 172.17.0.1
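If you ever need to script checks around these addresses — say, deciding whether an IP you pulled out of `docker inspect` belongs to the default bridge — Python's `ipaddress` module does the subnet math for you. A minimal sketch, assuming the default `172.17.0.0/16` subnet (it's configurable via the `bip` daemon option, so don't hardcode it in anything serious):

```python
import ipaddress

# Docker's default bridge subnet — configurable via the "bip" option in daemon.json
DOCKER0_SUBNET = ipaddress.ip_network("172.17.0.0/16")

def on_default_bridge(ip: str) -> bool:
    """Return True if the address falls inside docker0's default subnet."""
    return ipaddress.ip_address(ip) in DOCKER0_SUBNET
```

For example, `on_default_bridge("172.17.0.2")` is true for a typical first container on the default bridge, while a LAN address like `192.168.1.5` is not.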

The Four Network Drivers

Docker ships with several built-in network drivers (macvlan and ipvlan exist too, for more specialized setups), but these four cover almost every use case. Understanding when to use each one saves a lot of pain.

1. Bridge (Default)

This is what you get if you don't specify anything. Containers on the same bridge network can talk to each other. Containers on different bridge networks cannot — unless you explicitly connect them.

# Create a custom bridge network
docker network create my-app-network

# Run containers on that network
docker run -d --name api --network my-app-network my-api-image
docker run -d --name db --network my-app-network postgres:15

Now api can reach db by hostname — just postgres://db:5432/mydb. Docker's embedded DNS handles the name resolution automatically on custom bridge networks.

Important: The default docker0 bridge does NOT support DNS-based hostname resolution between containers. That's the #1 footgun for beginners. Always create a custom bridge network.

2. Host

The container shares the host's network stack entirely. No isolation, no port mapping needed — the container's port 3000 IS the host's port 3000.

docker run --network host nginx
# nginx is now accessible on the host's port 80 directly

Use this when performance is critical and you're on Linux. Avoid on Mac/Windows where Docker runs inside a VM anyway — the semantics get weird.

3. None

Completely disables networking. The container gets a loopback interface only.

docker run --network none my-batch-job

Perfect for security-sensitive batch processing that should never touch the network.

4. Overlay

For multi-host networking in Docker Swarm. Containers on different physical machines can communicate as if they're on the same network. Uses VXLAN encapsulation under the hood.

We'll skip the deep dive here — overlay is a Swarm-specific topic worth its own post.

Docker Compose: Networking Done Right

Here's where 90% of day-to-day Docker networking happens. Docker Compose automatically creates a custom bridge network for your stack and wires up all services. This is the happy path.

# docker-compose.yml
version: '3.9'

services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      DB_HOST: db
      REDIS_HOST: cache
    depends_on:
      - db
      - cache

  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine

volumes:
  pgdata:

In this stack, api can reach db at db:5432 and cache at cache:6379. No magic — it's Docker's DNS using service names.
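On the application side, this means your connection string is just built from the service name. Here's a hedged sketch of what that might look like in the api service — `DB_HOST` comes from the Compose file above, while `DB_PORT`, `DB_USER`, `DB_PASSWORD`, and `DB_NAME` are hypothetical extra variables shown for illustration:

```python
import os

def database_url() -> str:
    """Build a Postgres URL from environment variables.

    Inside the Compose network, DB_HOST is just the service name ("db") —
    Docker's embedded DNS resolves it to the container's IP at connect time.
    """
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    user = os.environ.get("DB_USER", "postgres")
    password = os.environ.get("DB_PASSWORD", "")
    name = os.environ.get("DB_NAME", "mydb")
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"
```

The nice property: the same code runs unchanged on your laptop (where `DB_HOST` defaults to `localhost`) and inside Compose (where it's the service name).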

Multiple Networks in Compose

What if you want isolation within your stack? Maybe your frontend should reach the API but NOT the database directly.

version: '3.9'

services:
  frontend:
    image: nginx
    networks:
      - frontend-net

  api:
    build: ./api
    networks:
      - frontend-net
      - backend-net

  db:
    image: postgres:15
    networks:
      - backend-net

networks:
  frontend-net:
  backend-net:

Now frontend and api share a network. api and db share a separate network. frontend has zero path to db. Clean security segmentation, zero extra config.
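The reachability rule here is simple enough to state as code. This isn't a Docker API — just a toy model of the mental rule: two containers can talk if and only if they share at least one network.

```python
def can_reach(a_networks: set[str], b_networks: set[str]) -> bool:
    """Two containers can talk iff they share at least one Docker network."""
    return bool(a_networks & b_networks)
```

Applying it to the stack above: `frontend` is on `{"frontend-net"}`, `api` on `{"frontend-net", "backend-net"}`, `db` on `{"backend-net"}` — so `frontend` reaches `api`, `api` reaches `db`, and `frontend` never reaches `db`.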

Port Mapping: What Actually Happens

When you do -p 8080:3000, you're telling Docker:

  • Listen on host port 8080
  • Forward traffic to container port 3000

Docker uses iptables rules under the hood (on Linux) to route that traffic through NAT. You can inspect this:

sudo iptables -t nat -L DOCKER --line-numbers

You'll see DNAT rules for each port mapping.
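To build intuition for what port publishing does, here's a minimal userspace TCP forwarder — conceptually what Docker's helper process does for published ports when it can't use iptables NAT directly (e.g. traffic from the host's own loopback). This is a sketch for illustration, not how Docker actually implements it on the kernel path:

```python
import socket
import threading

def forward(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then half-close dst."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_forwarder(listen_port: int, target_host: str, target_port: int) -> socket.socket:
    """Accept connections on 127.0.0.1:listen_port and relay each to the target.

    Think of listen_port as the host side of -p and the target as the
    container's IP and port. Returns the listening socket.
    """
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", listen_port))
    listener.listen()

    def accept_loop() -> None:
        while True:
            try:
                client, _ = listener.accept()
            except OSError:
                return  # listener was closed
            upstream = socket.create_connection((target_host, target_port))
            # One thread per direction: client -> target and target -> client
            threading.Thread(target=forward, args=(client, upstream), daemon=True).start()
            threading.Thread(target=forward, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener
```

The real NAT path is far more efficient (packets are rewritten in the kernel, not copied through userspace), but the contract is the same: listen on a host port, relay to a backend address.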

A few patterns worth knowing:

# Bind only to localhost (more secure — not exposed externally)
docker run -p 127.0.0.1:5432:5432 postgres

# Let Docker pick a random host port
docker run -p 3000 my-app

# Map all exposed ports automatically
docker run -P my-app

For internal services (databases, caches), consider NOT publishing ports at all. If they're on the same Docker network, other containers can reach them without exposing them to the host.

Debugging Network Issues

When something's not connecting, here's the toolkit:

# Inspect a network
docker network inspect my-app-network

# See which containers are connected
docker network inspect my-app-network --format '{{json .Containers}}' | python3 -m json.tool

# Test connectivity from inside a container
docker exec -it api ping db
docker exec -it api curl http://db:8080/health

# Drop into a container with networking tools
docker run --rm --network my-app-network nicolaka/netshoot nslookup db
docker run --rm --network my-app-network nicolaka/netshoot traceroute db

The nicolaka/netshoot image is a lifesaver — it's a container packed with every networking diagnostic tool you'd ever want: curl, nslookup, dig, netstat, ss, tcpdump, iperf3, and more.
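That `docker network inspect` output is just JSON, so it's easy to post-process in a script. A small sketch, assuming the usual inspect shape: an array of networks, each with a `Containers` object keyed by container ID, whose entries carry `Name` and a CIDR-style `IPv4Address`:

```python
import json

def list_endpoints(inspect_json: str) -> dict[str, str]:
    """Map container name -> IPv4 address from `docker network inspect` output."""
    networks = json.loads(inspect_json)
    endpoints = {}
    for network in networks:
        for info in network.get("Containers", {}).values():
            # IPv4Address comes back in CIDR form, e.g. "172.18.0.2/16"
            endpoints[info["Name"]] = info["IPv4Address"].split("/")[0]
    return endpoints
```

Feed it the output of `docker network inspect my-app-network` and you get a clean name-to-IP table, which is handy for quick "is this container actually on the network?" checks.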

A Real-World Example: API + Database + Reverse Proxy

Putting it all together with a realistic three-tier setup:

version: '3.9'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - public-net

  api:
    build: ./api
    environment:
      NODE_ENV: production
      DB_URL: postgresql://app_user:secret@db:5432/appdb
    networks:
      - public-net
      - private-net
    # No ports exposed — only reachable via nginx

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: app_user
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - private-net
    # No ports exposed — only reachable by api

networks:
  public-net:
  private-net:
    internal: true  # This network has NO external internet access

volumes:
  pgdata:

Note the internal: true on the private network — containers on that network cannot reach the internet at all, only each other. That's a solid security posture for a database tier.

Key Takeaways

  • Always create custom bridge networks — never rely on the default docker0 bridge for inter-container communication
  • Docker Compose handles 90% of networking automatically — service names become hostnames
  • Use multiple networks within a Compose stack to enforce security boundaries
  • Don't publish ports for services that only need to talk to other containers
  • nicolaka/netshoot is your debugging best friend
  • internal: true networks block outbound internet access entirely

Docker networking clicked for me the moment I stopped thinking of containers as mini-VMs and started thinking of them as processes with shared networking abstractions. Once that mental model locks in, debugging becomes a lot less mysterious.

What networking issue has tripped you up the most? Drop it in the comments — I'm curious how many people have hit the default bridge DNS problem.
