Mordecai

Understanding How Containers Communicate in Docker and Kubernetes

A beginner-friendly guide to Docker and Kubernetes networking


Why This Matters

When you run an application in Docker, it doesn't automatically know how to reach other services. A container is isolated by default — it has its own network namespace, its own IP address, and its own view of the world. For two services to talk, you have to explicitly connect them.
In this post, we'll walk through the common scenarios for container communication.

This is something I learned while building SwiftDeploy. My Go API and Nginx ran in separate containers, and I didn't fully understand how they communicated with each other.


Scenario 1: Two Containers Talking to Each Other

This is the most common scenario — an API and a database, or a frontend and a backend.

The wrong way is to use localhost. If your API tries to connect to localhost:5432 for PostgreSQL, it won't work. Inside a container, localhost refers to the container itself — not your host machine, not another container.

The right way is to use a Docker network. When two containers join the same network, they can reach each other by service name.

networks:
  myapp-net:
    driver: bridge

services:
  api:
    image: my-api
    networks:
      - myapp-net

  database:
    image: postgres
    networks:
      - myapp-net

Now the API can connect to the database using database:5432 — Docker's internal DNS resolves the service name to the container's IP automatically.
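In application code, the service name usually arrives through configuration rather than being hardcoded. A hypothetical extension of the compose file above (the credentials and database name are invented for illustration) might pass it in as an environment variable:

```yaml
# Hypothetical addition to the api service from the compose file above:
# the hostname "database" is the compose service name, which Docker's
# internal DNS resolves to the database container's IP.
services:
  api:
    image: my-api
    networks:
      - myapp-net
    environment:
      DATABASE_URL: postgres://app:secret@database:5432/appdb
```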

In SwiftDeploy, Nginx reaches the API using api:3000 — not localhost:3000. That's why it works.

How it works under the hood:
Docker creates a virtual bridge network. Every container on that network gets an internal IP (like 172.18.0.2). Docker runs an internal DNS server that maps service names to these IPs. When the API says "connect to database", Docker's DNS resolves it to 172.18.0.3 or whatever IP the database got.
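That resolution step is ordinary DNS, so you can see the same mechanism with Python's standard library. Inside a container on myapp-net, resolving `database` would behave just like resolving any hostname; here we resolve `localhost` as a stand-in so the sketch runs anywhere:

```python
import socket

# Resolve a hostname to an IPv4 address -- the same lookup Docker's
# embedded DNS server performs for service names like "database" on a
# user-defined network.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```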


Scenario 2: Container Talking to the Host Machine

Sometimes a container needs to reach something running directly on your laptop — like a local development server or a database running outside Docker.

You can't use localhost from inside a container to reach the host. Instead use:

  • On Mac/Windows: host.docker.internal — Docker Desktop provides this hostname automatically
  • On Linux: 172.17.0.1 — the default Docker bridge gateway IP (or pass --add-host=host.docker.internal:host-gateway to docker run to get the same hostname)

# Inside a container on Linux
db = connect("172.17.0.1:5432")

# Inside a container on Mac/Windows
db = connect("host.docker.internal:5432")
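A small sketch of that platform branching in application code (assuming Docker Desktop on Mac/Windows and the default bridge network on Linux; the function name is made up for this example):

```python
import platform

def docker_host_address() -> str:
    """Return the address a container can use to reach the host machine.

    host.docker.internal is provided automatically by Docker Desktop on
    Mac and Windows; on Linux the default bridge gateway is 172.17.0.1
    (unless the container was started with
    --add-host=host.docker.internal:host-gateway).
    """
    if platform.system() in ("Darwin", "Windows"):
        return "host.docker.internal"
    return "172.17.0.1"

print(docker_host_address())
```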

Alternatively, use --network host when running the container — this removes the network isolation entirely and the container shares the host's network stack. localhost works again but you lose isolation.

docker run --network host my-app

Scenario 3: Two Different Applications on the Same Machine (No Docker)

When two regular applications run on the same machine — no containers — they communicate through localhost and ports.

Application A listens on port 8000:

app.run(host="0.0.0.0", port=8000)

Application B connects to it:

response = requests.get("http://localhost:8000/api")

The operating system routes the traffic internally — it never leaves the machine. This is fast but means both apps must be on the same machine.
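The whole scenario fits in one runnable Python sketch: application A is a tiny HTTP server, and application B fetches from it over localhost (the port is chosen by the OS here to avoid clashes; in the example above it would be 8000):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from app A")

    def log_message(self, *args):
        pass  # silence per-request logging

# Application A: listen on an OS-assigned free port
server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Application B: connect via localhost -- the OS routes the traffic
# internally, and it never leaves the machine
body = urllib.request.urlopen(f"http://localhost:{port}/").read()
print(body.decode())
server.shutdown()
```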


Scenario 4: One Container, One Regular Application

This is the reverse proxy pattern — exactly what SwiftDeploy uses.

Nginx runs in a container. The API runs as a regular process on the host. How does Nginx reach the API?

Option 1 — Host gateway address:
Point the Nginx container at the API through the host:

# API runs on host port 3000
# Nginx container uses host.docker.internal:3000 (or 172.17.0.1:3000 on Linux) to reach it
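A sketch of what that looks like inside the Nginx container's config (the port is the hypothetical value from this example):

```nginx
# Proxy from the Nginx container to the API process running on the host.
# On Linux, substitute 172.17.0.1 for host.docker.internal.
location / {
    proxy_pass http://host.docker.internal:3000;
}
```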

Option 2 — Host network mode:
Run Nginx with --network host. Now it can use localhost:3000 directly.

In SwiftDeploy both the API and Nginx run in containers on the same Docker network — so they use service name discovery instead. But the pattern above is common in development setups.


Scenario 5: Kubernetes — How Pods Communicate

In Kubernetes, containers run inside pods. Communication works at two levels:

Within a pod — containers share localhost:
If two containers are in the same pod they share a network namespace. They communicate on localhost just like two processes on the same machine.

# Both containers in this pod share localhost
apiVersion: v1
kind: Pod
metadata:
  name: api-with-sidecar   # hypothetical name
spec:
  containers:
    - name: api
      image: my-api
      ports:
        - containerPort: 8000
    - name: sidecar
      image: my-sidecar     # hypothetical sidecar image
      # can reach api at localhost:8000

Between pods — use Services:
Pods get dynamic IPs that change when they restart. You never hardcode a pod IP. Instead you create a Service — a stable DNS name that routes to whatever pods match a label selector.

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api          # routes to pods with this label
  ports:
    - port: 80
      targetPort: 8000

Now any pod in the cluster can reach the API at api-service:80, or fully qualified as api-service.default.svc.cluster.local from other namespaces. Kubernetes DNS resolves the name to the Service's stable ClusterIP, and the Service routes traffic to a matching pod. Even if a pod restarts and gets a new IP, the Service name stays the same.

ClusterIP vs NodePort vs LoadBalancer:

  • ClusterIP — only accessible inside the cluster (like Docker's internal network)
  • NodePort — exposes the service on every node's IP at a specific port
  • LoadBalancer — provisions a cloud load balancer with a public IP
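For example, a variant of the Service above exposed on every node would look like this (the nodePort value is an arbitrary choice from the default 30000–32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: NodePort        # default is ClusterIP; LoadBalancer needs a cloud provider
  selector:
    app: api
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8000  # container port on the pods
      nodePort: 30080   # external port on every node (arbitrary example)
```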

Summary Table

| Scenario | How they communicate | Key tool |
| --- | --- | --- |
| Container ↔ Container | Service name on shared network | Docker network |
| Container → Host | host.docker.internal or 172.17.0.1 | Docker bridge gateway |
| App ↔ App (no Docker) | localhost:port | OS network stack |
| Container ↔ App on host | Host gateway or host network | Docker bridge gateway |
| Pod ↔ Pod (Kubernetes) | Service DNS name | Kubernetes Service |
| Container ↔ Container (same pod) | localhost | Shared network namespace |

What I Learned Building SwiftDeploy

When setting up SwiftDeploy, since both Nginx and the Go API were running in separate containers, Nginx used Docker service discovery (api:3000) rather than localhost to communicate with the API:

# Wrong — localhost doesn't reach another container
proxy_pass http://localhost:3000;

# Right — use the service name
proxy_pass http://api:3000;

How SwiftDeploy Was Structured

SwiftDeploy used multiple containers:

Client
   ↓
Nginx Container
   ↓
API Container


Later, I added:

  • an OPA container
  • observability and metrics
  • policy evaluation

All of these containers needed controlled communication.

The important design decision was:

  • only Nginx was publicly exposed
  • internal services stayed inside the Docker network

That separation was intentional, for both architecture and security reasons.

The fix itself was a one-word change: localhost became api in the Nginx config. That's how important understanding Docker networking is. Once I put both containers on the same named network and used the service name, everything worked.

The mental model that helped me most: each container is like a separate computer. To connect two computers you need a network. Docker networks are that network, and service names are like hostnames.


Read my SwiftDeploy project writeup here: https://dev.to/mordecai_amehson/swiftdeploy-a-tool-that-writes-its-own-infrastructure-170d

