The "It Works On My Machine" Illusion
You've built a beautiful, decoupled microservices architecture. Your Node/Express API runs smoothly on port 3000, and your React frontend fetches data perfectly from http://localhost:3000/api. You containerize both services, spin them up with Docker Compose, and suddenly the frontend's API calls fail with a Connection Refused error.
What happened? You fell into the localhost trap.
Understanding Docker's Internal Network
When running services locally without Docker, localhost refers to your host machine. Both your frontend and backend share this environment. However, when you containerize these services, Docker creates an isolated virtual network. Inside a container, localhost no longer refers to your laptop; it refers to the container itself.
When your React container tries to fetch from localhost:3000, it is looking inside its own isolated environment, where the Node API does not exist.
The Solution: DNS Resolution via Compose
Docker Compose automatically sets up a custom bridge network and provides internal DNS resolution using the service names defined in your docker-compose.yml.
Let's look at a standard compose file:
version: '3.8'
services:
  api-service:
    build: ./backend
    ports:
      - "3000:3000"
  web-client:
    build: ./frontend
    ports:
      - "8080:80"
    depends_on:
      - api-service
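Rather than hardcoding a hostname in your code at all, you can inject it through the compose file. A sketch, assuming a variable name (API_HOST) of my own choosing and a server-side process in web-client that reads it; Compose does not set this variable for you:

```yaml
  web-client:
    build: ./frontend
    ports:
      - "8080:80"
    depends_on:
      - api-service
    environment:
      # Hypothetical variable read by your server-side code;
      # Docker's DNS resolves "api-service" to the right container.
      - API_HOST=api-service
```

This keeps the service name in one place, so renaming api-service in the YAML only requires updating the compose file, not the application code.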
To fix the Connection Refused error, you must stop hardcoding localhost. When server-side code in one container needs to reach another containerized service, it must use that service's exact name from the YAML file as the hostname. (Code running in the browser is different: it executes on the host machine, outside Docker's network, so it reaches the API through the published port, e.g. http://localhost:3000.)
// BAD: Fails inside a Docker container
const response = await fetch('http://localhost:3000/api/users');
// GOOD: Utilizes Docker's internal DNS resolver
const response = await fetch('http://api-service:3000/api/users');
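Better still, server-side code can read the hostname from the environment so the same file works both inside and outside Docker. A minimal sketch; API_HOST is a name I'm assuming here (you would set it yourself, e.g. in docker-compose.yml), not something Compose provides automatically:

```javascript
// Fall back to localhost so the code still works when run directly
// on the host, outside any container.
const API_HOST = process.env.API_HOST || 'localhost';
const API_BASE = `http://${API_HOST}:3000`;

async function getUsers() {
  // fetch is built into Node 18+; inside Docker, API_HOST resolves
  // through Compose's internal DNS to the api-service container.
  const response = await fetch(`${API_BASE}/api/users`);
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return response.json();
}
```

With API_HOST=api-service set in the container, API_BASE becomes http://api-service:3000; unset, it falls back to http://localhost:3000 for local development.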
By respecting Docker's internal network topology, we ensure our microservices can communicate seamlessly, regardless of the host machine they are deployed on.