Containerizing full-stack apps reveals a harsh reality—your React frontend is the awkward middle child that can't speak to its Docker siblings.
Containerizing a full-stack application is a rite of passage for every DevOps-leaning engineer. You successfully get your Node.js backend talking to PostgreSQL, your Python ML service crunching data, and Redis caching everything in between.
But then, the "Middle Child" enters the room: The Frontend.
Despite being part of the docker-compose.yml family, the frontend often feels isolated, unable to speak the same internal language as its Docker siblings. Here's why it happens and how to solve it.
The Family Reunion That Excluded the Frontend
I recently orchestrated a multi-service e-commerce app with:

- Backend (Node.js/Express) ✅ Connected
- ML Service (Python/Flask) ✅ Connected
- PostgreSQL Database ✅ Connected
- Redis Cache ✅ Connected
- React Frontend ❌ Left out in the cold
Inside the Docker bridge network, life was beautiful. My backend could reach the ML service simply by using the service name:
```javascript
// Inside the backend container
const response = await fetch('http://ml-service:5000/recommendations/42');
// ✅ Works perfectly!
```
The Docker DNS handles the heavy lifting. Services on the same bridge network are family.
```yaml
# docker-compose.yml - Happy family networking
services:
  backend:
    networks: [app-network]
  ml-service:
    networks: [app-network]
  postgres:
    networks: [app-network]
  redis:
    networks: [app-network]

networks:
  app-network:
    driver: bridge
```
Frontend Got Kicked Out (The Harsh Reality)
Then I tried the same thing from my React frontend:
```javascript
// React frontend trying to join the family...
fetch('http://backend:8080/api/products')
  .then(res => res.json())
  .catch(err => console.error('Sibling rivalry:', err));
```

Browser Console:

```
net::ERR_NAME_NOT_RESOLVED
```
Why Did This Fail?
The "Middle Child Syndrome" stems from a fundamental misunderstanding of where the code actually runs:
- Backend code runs inside the Docker container → uses Docker's internal DNS
- Frontend code is delivered by Docker, but executes in the user's browser
The browser lives on your host machine, not inside the Docker bridge network. Your host's DNS has no idea what `http://backend` is. The Docker network is a private club your browser doesn't belong to.
```
Browser ←─── HTTP ───► ???  ──── Docker Network ───► Services
                        │        (backend, ml, db, redis)
                        │
                        └── Can't reach Docker DNS directly ❌
```
The Solutions: Pick Your Poison
1. Expose Everything (Security Nightmare ⚠️)
The quickest fix is to expose every service to your host machine:
```yaml
services:
  backend:
    ports:
      - "8080:8080"   # Now reachable from the host
  ml-service:
    ports:
      - "5000:5000"   # Also reachable from the host
```

React calls `http://localhost:8080` and `http://localhost:5000`. ✅ It works — but every internal service is now reachable from the host, and on a public server without a firewall, from the entire internet. In production this is a massive security no-go.
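If you genuinely need a published port during local development, Docker's port mapping accepts a host IP, so you can at least bind to loopback rather than all interfaces:

```yaml
services:
  backend:
    ports:
      - "127.0.0.1:8080:8080"   # Reachable from this machine only,
                                # not from other hosts on the network
```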
2. Backend Proxy (Secure & Simple)
Route frontend requests through your backend, which can access Docker DNS:
```yaml
services:
  backend:
    ports:
      - "8080:8080"   # Single exposed port
  ml-service:
    # No ports exposed - internal only
```
Backend proxies ML requests:
```javascript
// backend/server.js (Node 18+ for the global fetch)
app.get('/api/ml/*', async (req, res) => {
  try {
    // With an Express 4 wildcard route, req.params[0]
    // holds everything after /api/ml/
    const mlUrl = `http://ml-service:5000/${req.params[0]}`;
    const response = await fetch(mlUrl);
    res.status(response.status).json(await response.json());
  } catch (err) {
    res.status(502).json({ error: 'ML service unreachable' });
  }
});
```
React calls `/api/ml/recommendations` → the backend proxies to `ml-service:5000`. ✅ Secure and elegant.
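The heart of any proxy route is the rewrite from the public path to the internal Docker-DNS URL. Here's a minimal, framework-free sketch of that mapping (the helper name and the `/api/ml` prefix are illustrative, not part of any library):

```javascript
// Hypothetical helper: rewrite an incoming public path like
// /api/ml/recommendations/42 into the internal Docker-DNS URL
// http://ml-service:5000/recommendations/42.
function toMlServiceUrl(publicPath, base = 'http://ml-service:5000') {
  // Strip the public /api/ml prefix; keep the remaining path
  const rest = publicPath.replace(/^\/api\/ml\/?/, '');
  return `${base}/${rest}`;
}

console.log(toMlServiceUrl('/api/ml/recommendations/42'));
// → http://ml-service:5000/recommendations/42
```

Everything after the prefix is passed through untouched, so the ML service's own routes don't need to know the public URL scheme.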
3. Nginx Reverse Proxy (Production Ready)
This is the most architecturally sound approach. Put a "Gatekeeper" in front of your family:
```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # Mount as a conf.d snippet: the image's stock nginx.conf
      # includes conf.d/*.conf inside its http block
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
  frontend:
    # No ports exposed
  backend:
    # No ports exposed
  ml-service:
    # No ports exposed
```
`nginx.conf`:

```nginx
server {
    listen 80;

    location /api/ {
        proxy_pass http://backend:8080/;
    }

    location /ml/ {
        proxy_pass http://ml-service:5000/;
    }

    location / {
        proxy_pass http://frontend:3000/;
    }
}
```
Now your frontend only talks to one place: the Nginx port. Nginx lives inside the Docker network and handles routing to all siblings.
```
Browser ←─── HTTP ───► Nginx (port 80) ──── Docker Network ───► Services ✅
```
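One nginx subtlety worth calling out: the trailing slash on `proxy_pass` controls whether the matched `location` prefix is stripped before forwarding. With a trailing slash, a request to `/api/products` reaches the backend as `/products`; without one, the backend receives the original `/api/products`:

```nginx
location /api/ {
    proxy_pass http://backend:8080/;   # /api/products → backend sees /products
    # proxy_pass http://backend:8080;  # /api/products → backend sees /api/products
}
```

Your Express routes need to match whichever form you pick, so choose deliberately.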
The Root Cause: Network Architecture Mismatch
Docker bridge networks give you:

- ✅ Service-to-service communication
- ✅ Container DNS resolution
- ❌ Browser-to-container access (not without port mapping or a proxy)
The fix requires understanding execution context:
| Code Location | Runs Where? | Can Use Docker DNS? |
|---|---|---|
| `backend/server.js` | Inside container | ✅ Yes |
| `ml-service/app.py` | Inside container | ✅ Yes |
| `frontend/src/App.jsx` | In browser | ❌ No |
Lessons Learned (The Hard Way)
1. **Execution Context is King** - Always ask: "Where is this code actually running?" If it's a `.jsx` file, it runs in the browser, not in Docker.
2. **One Ingress Point** - Avoid "Swiss cheese" security. Expose one port (80/443) and route everything internally.
3. **Environment Variables Save Lives**:

   ```javascript
   const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:8080';
   ```

4. **CORS is Your Friend** - Configure it properly on every service that Nginx proxies:

   ```javascript
   app.use(cors({ origin: process.env.ALLOWED_ORIGINS }));
   ```
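Behind a single-ingress nginx setup, the cleanest variant of lesson 3 is usually a *relative* base URL. A small sketch of the idea (the helper name and the `VITE_API_URL` variable are illustrative; Vite injects `import.meta.env` at build time):

```javascript
// Hypothetical helper: pick the API base URL from build-time env,
// falling back to a relative path served through the reverse proxy.
function resolveApiBase(env = {}) {
  return env.VITE_API_URL || '/api';
}

// In the React app:
// const API_BASE = resolveApiBase(import.meta.env);
// fetch(`${API_BASE}/products`)
```

A relative base like `/api` only works when the frontend is served from the same origin as the proxy — which is exactly what the nginx setup provides, and it sidesteps CORS entirely.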
Complete Working Example
```yaml
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on: [frontend, backend]

  frontend:
    build: ./frontend
    # No ports - accessed via nginx

  backend:
    build: ./backend
    environment:
      DB_HOST: postgres
      REDIS_URL: redis://redis:6379
    # No ports - accessed via nginx

  ml-service:
    build: ./ml-service
    # Internal only

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: shopmicro
      POSTGRES_PASSWORD: changeme   # required by the postgres image; illustrative value

  redis:
    image: redis:alpine

networks:
  default:
    driver: bridge
```
The frontend middle child finally found its place - behind a proxy, secure, and talking to all its Docker siblings through proper networking architecture.
What's Next?
In my next post, I'll share how I took this ShopMicro architecture and scaled it into a Kubernetes cluster with Ingress controllers—where the networking gets even more "fun."