What you'll learn
By the end of this section, you'll:
- Understand how Docker networking works and why it's important for multi-container apps.
- Learn the difference between bridge, host, and overlay networks in Docker.
- Know how to expose ports using -p so services can communicate across containers or expose endpoints to your host.
- See how to connect multiple containers so they can communicate internally.
- Complete a hands-on project where a React app communicates with a Node.js backend over a shared custom bridge network in Docker.
We'll start by understanding why Docker networking is important.
Why Docker networking is important in multi-container applications
When you're working on a project, it's rarely a single container doing all the work. You'll usually have a:
- backend API
- frontend application
- database
- sometimes a message broker

... all running as separate containers.
The key is that these containers need to communicate with each other over a network.
For example, if your frontend can't reach the backend, or your API can't access the database, the entire application breaks. That's where Docker networking comes in.
It enables containers to communicate reliably and securely, without exposing everything to the internet.
Okay, let me give you a common scenario.
Let's say you're building a React application that fetches data from a Node.js backend. During development, you might call the backend using "http://localhost:4000", right?
That works when both apps run directly on your local machine. But once they run in separate containers, localhost no longer refers to the same environment. The React container’s localhost is not the same as the backend’s.
Now you need a way for those containers to discover and communicate with each other.
So, what's the solution?
Docker solves this by creating a virtual network and allowing containers to discover each other by name, like an internal DNS system.
So if you name your backend container "backend", your frontend can make requests to "http://backend:4000".
Without any need for IPs or manual linking, it just works, as long as both containers are on the same user-defined Docker network.
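If you want to see this behavior before we build the project, here's a minimal sketch using throwaway public images (the names demo-net and web are purely illustrative):

# Create a user-defined network
docker network create demo-net

# Start an nginx container named "web" on that network
docker run -d --name web --network demo-net nginx:alpine

# From a second container on the same network, reach "web" by name
docker run --rm --network demo-net busybox wget -qO- http://web

# Clean up
docker rm -f web && docker network rm demo-net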
In this project, you'll see how this works in practice. You’ll:
- Create a shared custom network
- Run both containers on it
- Configure the frontend to communicate with the backend using its container name
This workflow is foundational for larger containerized systems and directly applies to more advanced tooling like Docker Compose or Kubernetes.
Now that you have a clear understanding of why containers need to communicate, the next section will show you how Docker makes that communication possible behind the scenes.
How Docker networking works
Now that you understand why containers need to communicate, it’s important to see how Docker enables that communication.
When Docker is installed, it creates a default network called bridge.
This bridge is a virtual network that Docker uses to connect containers on the same host. If you run containers without explicitly assigning them to a custom network, Docker attaches them to this default bridge network.
Every container connected to a bridge network gets its own internal IP address. More importantly, when containers share a user-defined network, Docker's built-in DNS lets them resolve each other by name. (The default bridge only supports communication by IP address, which is one reason we'll create a custom network for this project.)
This means instead of using localhost or an IP address, a container can communicate with another container simply by using its name.
For example, if a container is started with --name backend, any other container on the same user-defined network can reach it at: "http://backend"
This feature is what makes internal container communication seamless. You don’t need to hardcode IPs or expose every service to the outside world.
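If you're curious, you can see the networks Docker created out of the box right now:

# List the default networks Docker creates (bridge, host, none)
docker network ls

# Inspect the default bridge to see connected containers and their IPs
docker network inspect bridge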
The 3 types of Docker networks
Now, let's break down the three main network types you should know when working with Docker:
1. Bridge network (default for single-host setups)
This is the most commonly used network type for local development. When multiple containers are attached to the same user-defined bridge network, they can communicate with each other using their container names.
In this project, a custom bridge network will be used. Creating a named network gives more control and ensures both the frontend and backend containers are communicating within a shared, isolated environment.
2. Host network
This mode removes network isolation between the container and the host. The container shares the host’s network stack. It is typically used when maximum network performance is needed or when the container must bind directly to host ports.
We won't use this in our project, but it's important to be aware of its use cases and limitations.
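For illustration only, here's a minimal sketch (it assumes a Linux host, since host networking has traditionally been Linux-only; Docker Desktop support varies by version):

# No -p flag needed: nginx binds directly to port 80 on the host
docker run --rm --network host nginx:alpine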
3. Overlay network
This one is for multi-host setups, like when you're running Docker Swarm or Kubernetes. It allows containers running on different physical machines to communicate over a secure virtual network.
It's not needed for single-machine setups, but essential when deploying distributed systems in production.
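Just so you can recognize it later, here's a sketch of creating one. It requires initializing Swarm mode first, and you don't need any of this for the tutorial (my-overlay is an illustrative name):

# Turn this machine into a single-node Swarm manager
docker swarm init

# Create an attachable overlay network that can span multiple hosts
docker network create -d overlay --attachable my-overlay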
So, to recap:
For most development environments and projects running on a single host, a custom bridge network is the most practical choice. It provides container name resolution, clean isolation, and is simple to configure.
This is why the upcoming steps in the project will use a custom bridge network, which ensures that the frontend and backend containers can communicate securely and reliably.
Coming up next, I'll show you how to expose container ports with the -p flag so services running in containers can be accessed from the host machine or other tools. Let's walk through that now.
Exposing container ports with -p
Now that you’ve seen how containers can communicate with each other on a Docker network, it’s also important to understand how your host machine (such as your browser, Postman, or terminal) can communicate with those containers.
By default, Docker containers are isolated. Even if a service is running correctly inside a container, it cannot be accessed from outside unless a port is explicitly exposed.
This is where the -p flag comes in.
When starting a container, the -p option maps a port inside the container to a port on the host machine. This allows external tools, like your browser, to access services running inside the container.
For example:
-p 3000:3000
This tells Docker to take port 3000 from inside the container and map it to port 3000 on your host machine.
So if your React app is running inside the container on port 3000, you can open your browser and visit "http://localhost:3000"
If you forget to use the -p flag, the container might still be running and responding internally, but you will not be able to access it from your host environment.
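For example, the host and container ports don't have to match. Here's a quick sketch using a stock nginx image (the port choices are arbitrary):

# Map host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx:alpine

# The container's port 80 is now reachable from the host on 8080
curl http://localhost:8080

# Clean up
docker rm -f web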
This mapping is most important for:
- Frontend applications that need to be opened in a browser
- APIs that should be tested using Postman, curl, or other tools
- Any service that should be exposed to the outside world during development or testing

In our project, we'll expose both the backend (on port 4000) and the frontend (on port 3000) using -p. This will allow us to view the React app in the browser and test the Node.js API externally.
Up next, I’ll walk you through how containers communicate internally without needing to expose their ports to the host at all. This is particularly useful when two services need to communicate entirely within Docker.
How containers communicate with each other internally
Earlier, you learned how to expose a container’s port to your host machine using the -p flag. That setup is useful when you want to access a service from your browser, testing tools like Postman, or terminal utilities like curl.
But what happens when two containers need to communicate internally, without routing through your host machine?
For example, say your React frontend container needs to fetch data from a Node.js backend container. Both are running inside Docker. In this case, you do not need to expose any ports with -p to enable communication between them.
This works because Docker automatically sets up internal networking for containers that are on the same network. Docker provides an internal DNS system that allows containers to resolve each other by name.
If your backend container is named backend, your React app can send a request like this:
http://backend:4000
There’s no need to use IP addresses or expose backend ports to the host. The container name acts as the hostname, and Docker handles the rest.
To enable this setup, both containers must be attached to the same Docker network. You can do this by creating a custom bridge network and passing the --network flag when running each container.
In this project, that’s exactly what we’ll do:
- Create a custom bridge network
 - Connect both containers to it
 - Configure the React app to communicate with the backend using the container name (backend) and the correct port (4000)
 
This internal communication model is how multi-container systems typically work. Services communicate over a private network without needing to expose every service to the outside.
Alright, now let’s move on to the project where we’ll put all of this into practice. You’re going to use a React frontend + Node.js backend running in separate containers, communicating over Docker’s internal network.
Let’s go.
Project: Connect a React frontend container to a Node.js backend container
You’ve now seen the theory behind Docker networking. Let’s put that knowledge into practice with a hands-on project.
In this section, you’ll build/clone a two-service application using Docker: a React frontend that fetches data from a Node.js backend. Both services will run in separate containers, and you’ll configure them to communicate over a shared Docker network.
Here’s what you’ll do:
- Containerize each application with a custom Dockerfile
  - The React app will be built and served using Nginx
  - The backend will run on Node.js 22
- Create a custom Docker bridge network
- Configure the frontend to communicate with the backend using the backend's container name, not localhost
- Verify that everything works in your browser and from inside the containers
If you don’t already have a project set up, you can clone these two minimal demo repositories to follow along:
- React frontend: https://github.com/d-emeni/react-demo
- Node.js backend: https://github.com/d-emeni/node-api-demo
Let’s get started.
Step 1: Understand the project structure
Let’s start by walking through what each part of the application does and how they interact once containerized.
The React app (frontend) runs on port 3000 in development. It fetches a list of users from a backend API.
The Node.js backend listens on port 4000, serving a JSON response at the /api/users endpoint.
In a typical local development setup, the frontend would send requests like this:
http://localhost:4000/api/users
That works because both the frontend and backend are running directly on your machine.
However, once you move both apps into containers, that changes. Each container has its own isolated environment, including its own version of "localhost". So if the frontend container tries to send a request to "localhost:4000", it's actually trying to call itself, not the backend.
To solve this, we'll:
- Connect both containers to a shared Docker network
- Update the frontend to communicate with the backend using the container name, like "http://backend:4000"
- In production (when the frontend is served via Nginx), configure it to call the backend using a relative path like /api, which Nginx will proxy to the backend container
This approach allows the two services to communicate reliably within Docker, without hardcoding IP addresses or exposing unnecessary ports.
Step 2: Add Dockerfiles to both apps
To run your applications inside Docker containers, you need to define how each one should be built. That’s what a Dockerfile does, it’s a step-by-step recipe that tells Docker how to package your app into a runnable image.
Dockerfile for the React frontend
In the root of your React project (react-demo/), create a file called Dockerfile with the following content:
# Stage 1: Build the React app
FROM node:22-alpine AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build
# Stage 2: Serve the app with Nginx
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Let’s break this down:
- Stage 1 (builder): Uses Node.js 22 to install dependencies and run the Vite build, which outputs the static files into the dist/ folder.
- Stage 2 (Nginx): Copies the build output into the default Nginx web root and starts the Nginx server to serve the static files.
This multi-stage Dockerfile keeps your final image small and production-optimized. You're only shipping the compiled frontend and not the entire Node environment.
Make sure to also include an nginx.conf file in your project root. This ensures that API requests like /api/users are correctly forwarded to the backend container; a minimal example is sketched below.
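Since the Dockerfile above copies nginx.conf to /etc/nginx/nginx.conf, here's a minimal sketch of what that file could look like. It assumes the backend container will be named backend and listens on port 4000 (matching the rest of this tutorial); the demo repo may ship its own version:

events {}

http {
  include /etc/nginx/mime.types;

  server {
    listen 80;

    # Serve the built React app
    location / {
      root /usr/share/nginx/html;
      index index.html;
      try_files $uri $uri/ /index.html;
    }

    # Forward /api requests to the backend container by name
    location /api/ {
      proxy_pass http://backend:4000;
    }
  }
}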
Dockerfile for the Node.js backend
In the root of your backend project (node-api-demo/), create a file named Dockerfile with the following content:
# Use Node.js 22 Alpine image
FROM node:22-alpine
# Set the working directory
WORKDIR /app
# Copy the backend code
COPY . .
# Install dependencies
RUN npm install
# Expose the backend port
EXPOSE 4000
# Start the server
CMD ["node", "server.js"]
This Dockerfile defines everything Docker needs to run your backend service:
- Installs dependencies
- Runs your server.js file using Node.js
- Exposes port 4000 for incoming API requests
If you’d like a more detailed walkthrough of Dockerizing a Node.js backend, check out my step-by-step guide.
Step 3: Build Docker images
Now that both applications have Dockerfiles, it’s time to package them into Docker images. These images will serve as the blueprints for running your containers.
Open your terminal and run the following commands from the root of each project:
Make sure your Docker daemon is running before you begin. If you're not sure, follow this setup guide.
# Inside react-demo/
docker build -t react-app .
# Inside node-api-demo/
docker build -t backend-api .
Let’s break that down:
- The -t flag assigns a name (or "tag") to your image. In this case, we're naming them react-app and backend-api.
- The . at the end tells Docker to use the current directory as the build context, where it will look for the Dockerfile and app code.

Once these build steps are complete, you'll have two ready-to-run Docker images:
- react-app: a production-ready build of your React frontend, served with Nginx.
- backend-api: your Node.js server listening on port 4000.
You’ll see output logs during the build process. Here's what the end of the build typically looks like for each app:
React frontend image build:
Node.js backend image build:
If you see something like this, it means your images were built successfully and are now ready to be run in containers.
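You can also confirm both images exist locally:

# List local images; react-app and backend-api should appear
docker image ls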
Next up, we’ll create a Docker network to allow both containers to communicate.
Step 4: Create a custom Docker network
For the frontend and backend containers to communicate by name, they must be connected to the same Docker network.
Open your terminal (you can run this from any directory) and create a custom bridge network:
docker network create react-backend-net
This command sets up a new bridge network named react-backend-net. If successful, Docker will return a long alphanumeric string that represents the network ID.
You won’t need to interact with this ID directly. What matters is the name of the network (react-backend-net), because that’s what you’ll reference when running containers.
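If you want to confirm the network was created, you can list it or inspect it:

# Show the network in the list
docker network ls

# See its details (driver, subnet, and later, connected containers)
docker network inspect react-backend-net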
Once both containers are connected to this network, Docker will enable them to resolve each other by container name. For example, the frontend can reach the backend simply using "http://backend:4000" without any IP addresses or exposed ports.
If you get an error saying the network already exists, that means it was previously created. You can safely skip this step and continue with the next one.
Next, we’ll run the backend container and connect it to this custom network.
Step 5: Run the backend container
Start by running the Node.js backend as a container. This will ensure the API is running and ready to handle requests before we launch the React app.
Before running the command below, ensure the following:
- You're not already running the backend server locally on port 4000.
- Port 4000 is free (not being used by another process).
- The Docker daemon is running properly on your system.
Now, open your terminal and run:
docker run -d \
  --name backend \
  --network react-backend-net \
  -p 4000:4000 \
  backend-api
This command does the following:
- --name backend assigns the container a name. Other containers on the same network (like the frontend) can refer to it using this name.
- --network react-backend-net attaches the container to the custom Docker network we created in Step 4.
- -p 4000:4000 maps port 4000 inside the container to port 4000 on your host, so you can access the API via "http://localhost:4000".
If everything starts correctly, Docker will output a long container ID, which confirms the backend container is running in the background.
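Before moving on, you can sanity-check the backend from your host (this assumes the demo API serves /api/users, as described in Step 1):

# Confirm the container is up
docker ps

# Hit the API through the published port
curl http://localhost:4000/api/users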
Common errors and how to resolve them
If you run the command and Docker throws an error, don’t worry. These are the two most common ones you might see during this step, and how to fix them quickly:
(1). Port already in use
Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:4000...
This means something (like your local server) is already using port 4000.
Solution:
Stop the process using the port (e.g., your locally running Node server), or
Change the host port in the Docker command:
-p 4001:4000
This maps port 4001 on your machine to port 4000 inside the container.
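If you'd rather free up port 4000 instead, you can find the process using it. Here's a sketch for macOS/Linux (replace the PID placeholder with the value from the output):

# Find the process listening on port 4000
lsof -i :4000

# Stop it using the PID from the output
kill <PID>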
(2). Container name already in use
Error response from daemon: Conflict. The container name "/backend" is already in use...
Docker doesn’t allow duplicate container names. You can fix this in one of two ways:
- Option A: Remove the existing container:
docker rm -f backend
- Option B: Run the new container with a different name:
--name backend-v2
Once your backend container is running successfully, you’re ready to launch the frontend container and configure it to communicate with the backend through the Docker network.
Step 6: Configure the React frontend to communicate with the backend container
Before running the React container, we need to update the frontend code so it knows how to communicate with the backend service inside Docker.
During local development, you may have written a request like this:
fetch("http://localhost:4000/api/users")
That works when both apps run on your machine. However, once they’re in containers, this URL will no longer work, because each container has its own isolated environment. Inside the React container, localhost refers to itself, not the backend.
To resolve this, you have two important steps:
(1). Use a relative path in api.js
Update the API request to use a relative path instead of a hardcoded URL:
fetch("/api/users")
This ensures the React app can stay agnostic of the backend's full URL, letting us handle routing through Docker or Nginx.
In your src/services/api.js file, your updated code might look like:
const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || "";
export async function fetchUsers() {
  const response = await fetch(`${API_BASE_URL}/api/users`);
  if (!response.ok) {
    throw new Error(`Failed to fetch users: ${response.status}`);
  }
  return response.json();
}
(2). Set VITE_API_BASE_URL in the .env file
To make this work inside Docker, you should set the environment variable in your React app’s .env file like this:
VITE_API_BASE_URL=
This means: use a relative path like /api (which Nginx will proxy internally to the backend container). In our container setup, the Nginx configuration ensures that requests to /api are forwarded to the backend.
Inside the container, this relies on Docker’s internal networking and DNS resolution. Since the backend container is named backend, Nginx knows how to forward requests to "http://backend:4000".
Once these updates are made:
- Your React app will send API requests to /api/users
- Nginx inside the container will forward them to "http://backend:4000/api/users"
- The backend will respond, and the data will be rendered in your React UI
This setup keeps your frontend clean, avoids hardcoding environment-specific URLs, and works seamlessly inside Docker.
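One detail worth knowing: Vite reads VITE_-prefixed variables at build time, not at container runtime. If you change the .env file, rebuild the frontend image before running it again:

# Rebuild so the new env values are baked into the bundle (run inside react-demo/)
docker build -t react-app .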
Next, we’ll run the React container and connect it to the backend via the shared Docker network.
Step 7: Run the React container
With your Docker image for the frontend built and the backend container already running, you're ready to launch the React app inside a container.
Run the following command:
docker run -d \
  --name frontend \
  --network react-backend-net \
  -p 3000:80 \
  react-app
Let’s break down what this does:
- --name frontend assigns the container a name for internal communication and easier reference
- --network react-backend-net connects the container to the same Docker network as the backend, enabling internal communication
- -p 3000:80 maps port 80 inside the container (used by Nginx to serve the app) to port 3000 on your machine, so you can access it at "http://localhost:3000"
Once the container is running, visit:
http://localhost:3000
You should see your React app in the browser. If everything is configured correctly, it will fetch the user data from the backend container and display the list.
If you run into any issues, don’t worry. We'll cover debugging in the next step.
Step 8: Debug or verify container communication
If your React app displays an error like "Failed to fetch", it means the frontend was unable to reach the backend API. Here are several ways to diagnose and resolve the issue:
1. Check the backend logs
Run the following command to inspect the backend container:
docker logs backend
This will show whether the request from the frontend reached the backend, and whether the server responded successfully or encountered an error (e.g., route not found or internal server error).
2. Use browser developer tools (DevTools)
In your browser:
- Open the Network tab (inside DevTools)
- Refresh the page
- Look for the request to /api/users (in the browser it appears as http://localhost:3000/api/users; the browser itself can't resolve the container name backend, so the request goes through Nginx, which proxies it to the backend)
- Inspect the status code, response, and any error message in the preview or console.

This helps you determine if the request was blocked, returned a 404/500, or failed due to CORS or DNS issues.
3. Ping the backend container from inside the frontend container
You can enter the frontend container's shell like this:
docker exec -it frontend sh
Then run:
ping backend
If the containers are correctly connected to the same network, you'll see successful ping responses like:
PING backend (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=0.102 ms
This confirms that the frontend can resolve and reach the backend container by name.
(Optional) Test the backend API from within the frontend container
Still inside the frontend shell, try (the -qO- flags print the response to the terminal instead of saving it to a file):
wget -qO- http://backend:4000/api/users
or, if curl is available:
curl http://backend:4000/api/users
This helps you verify that a valid HTTP response is returned from the backend endpoint.
If the containers can communicate but the React app still fails to fetch, check for:
- typos in the API URL
- missing environment variables
- CORS issues in your backend (if applicable)
Step 9: Clean up
Before we wrap up this project, let’s clean up everything we created: the containers and the custom network.
This helps avoid conflicts when you’re working on future Docker projects, and keeps your environment tidy.
Follow the steps below to remove both containers and the network.
1. Remove the frontend container
We’ll start by stopping and removing the frontend container.
Run this from any terminal:
docker rm -f frontend
If it works correctly, Docker will stop and remove the container, printing the container name (frontend) back to the terminal.
2. Remove the backend container
Next, remove the backend container:
docker rm -f backend
This will stop and delete the backend container, printing its name on success.
3. Remove the custom Docker network
Now let’s remove the network that connected the containers:
docker network rm react-backend-net
If successful, Docker will simply return the name of the network, confirming it has been deleted.
4. Confirm everything is gone
To double-check that no containers are still running, run:
docker ps
You should see no active containers, just an empty table showing only the column headers.
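You can also confirm the custom network is gone:

# react-backend-net should no longer appear in the list
docker network ls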
So what you just built (and why it’s useful)
You’ve just completed a hands-on Docker networking project using a frontend and backend app.
Let's walk through what you did:
- You containerized two apps: a React frontend and a Node.js backend
- You created a custom Docker network so the containers could communicate
- You updated the frontend to connect to the backend by container name, not localhost
- You verified everything worked, using your browser, terminal, and Docker commands
By doing this step-by-step, you've learned how to:
- Run separate services inside containers
- Connect them on the same Docker network
- Avoid common issues developers face when containers can't reach each other
In the next tutorial, I'll show you how to simplify this setup using a docker-compose.yml file, so you can launch everything with one command. Follow me on dev.to to get notified.