Beginner’s Guide — Build a Node.js Microservices Stack with Docker, MongoDB & RabbitMQ

Want to learn how to put together a small microservices system using Node.js services, MongoDB for persistence, and RabbitMQ as a message broker for notifications? This guide walks you through everything: the full code, what each line does, why and when to use RabbitMQ, and how to run it all with Docker Compose.


Project overview (what you’re building)

A simple microservice system with:

  • user-service — CRUD for users (MongoDB users DB). HTTP API on port 3000.
  • task-service — CRUD for tasks (MongoDB tasks DB). When a task is created it publishes a message to RabbitMQ queue task_created. HTTP API on port 3001.
  • notification-service — Listens to the task_created queue and logs notifications (simulates sending emails/push). No HTTP API is needed; the Dockerfile exposes port 3002 only for parity with the other services.
  • mongo — MongoDB container, data persisted to a named Docker volume.
  • rabbitmq — RabbitMQ (with management UI) to broker messages between services.

You control everything with a single docker-compose.yml.


Full docker-compose.yml

services:
  mongo:
    image: mongodb/mongodb-community-server:latest
    container_name: mongo
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db

  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"

  user-service:
    build: ./user-service
    container_name: user-service
    ports:
      - "3000:3000"
    depends_on:
      - mongo

  task-service:
    build: ./task-service
    container_name: task-service
    ports:
      - "3001:3001"
    depends_on:
      - mongo
      - rabbitmq

  notification-service:
    build: ./notification-service
    container_name: notification-service
    ports:
      - "3002:3002"
    depends_on:
      - mongo
      - rabbitmq

volumes:
  mongo_data:

Explanation (line by line — docker-compose.yml)

  • services: — top level key, declares containers to run.
  • mongo: — defines a service named mongo.

    • image: mongodb/mongodb-community-server:latest — uses official MongoDB community server image (latest tag). This will pull from Docker Hub.
    • container_name: mongo — sets the container’s name to mongo for easier docker commands.
    • ports: - "27017:27017" — exposes MongoDB default port 27017 to the host (host:container). Useful for debugging with mongo CLI / GUI clients.
    • volumes: - mongo_data:/data/db — persists DB files into a named volume mongo_data so data survives container restarts.
  • rabbitmq: — defines RabbitMQ service.

    • image: rabbitmq:3-management — RabbitMQ image that includes the management UI plugin (accessible on 15672).
    • container_name: rabbitmq — container name.
    • ports:
    • "5672:5672" — RabbitMQ AMQP port (used by producers/consumers).
    • "15672:15672" — management web UI (open in browser to monitor queues).
  • user-service: — our Node.js user service.

    • build: ./user-service — build from user-service folder and its Dockerfile.
    • container_name: user-service — container name.
    • ports: - "3000:3000" — expose port 3000.
    • depends_on: - mongo — Docker Compose will start mongo before user-service. Important: depends_on only controls start order; it does not wait for MongoDB to be fully ready, so your code must handle retries, or you can add a healthcheck (see the sketch after this list).
  • task-service: — Node service for tasks.

    • depends_on: - mongo - rabbitmq — start mongo and rabbitmq first, but again readiness should be handled by the app (this code does a retry for RabbitMQ).
  • notification-service: — consumer that listens to RabbitMQ and prints notifications.

    • depends_on: for both mongo and rabbitmq.
  • volumes: mongo_data: — declares a named Docker volume that was referenced earlier.
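
Since depends_on alone doesn't guarantee readiness, an optional improvement is to add healthchecks and make depends_on wait for them. A minimal sketch (not part of the compose file above, assuming the images ship mongosh and rabbitmq-diagnostics):

  mongo:
    image: mongodb/mongodb-community-server:latest
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 5

  rabbitmq:
    image: rabbitmq:3-management
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

  task-service:
    build: ./task-service
    depends_on:
      mongo:
        condition: service_healthy   # start only after the healthcheck passes
      rabbitmq:
        condition: service_healthy

With condition: service_healthy, Compose delays starting the service until its dependencies report healthy; the in-code retry loops are still worth keeping as a safety net.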


user-service

user-service/Dockerfile

FROM node:22
WORKDIR /app 
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node","index.js" ]

Explanation (line by line):

  • FROM node:22 — base image containing Node.js v22. This includes Node and npm. Pick a specific version in production to avoid surprises.
  • WORKDIR /app — sets working directory inside container to /app.
  • COPY package*.json ./ — copies package.json and package-lock.json if present (install dependencies first).
  • RUN npm install — installs dependencies inside the image so we don't run install on container start.
  • COPY . . — copy application code into the container (see the .dockerignore note after this list).
  • EXPOSE 3000 — documents that container listens on port 3000 (for humans and some tools).
  • CMD [ "node","index.js" ] — default command when container starts; run index.js.
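
One optional addition (not shown in this tutorial): because COPY . . copies the entire folder, a small .dockerignore next to the Dockerfile keeps a local node_modules and log files out of the image and speeds up builds:

node_modules
npm-debug.log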

user-service/index.js

const express = require("express");
const mongoose = require("mongoose");
const bodyParser = require("body-parser");

const app = express();
const port = 3000;

app.use(bodyParser.json());

mongoose
  .connect("mongodb://mongo:27017/users")
  .then(() => {
    console.log("Connected to MongoDB");
  })
  .catch((error) => {
    console.error("Error connecting to MongoDB:", error);
  });

const UserSchema = new mongoose.Schema({
  name: String,
  email: String,
});

const User = mongoose.model("User", UserSchema);

app.get("/users", async (req, res) => {
  const users = await User.find();
  res.json(users);
});

app.post("/users", async (req, res) => {
  const { name, email } = req.body;
  try {
    const user = new User({ name, email });
    await user.save();
    res.status(201).json(user);
  } catch (error) {
    console.error("Error Saving: ", error);
    res.status(500).json({ error: "Internal Server Error" });
  }
});

app.get("/", (req, res) => {
  res.send("Hello World!");
});

app.listen(port, () => {
  console.log(`User service listening on port ${port}`);
});
Enter fullscreen mode Exit fullscreen mode

Line-by-line explanation — index.js:

  • const express = require("express"); — import Express, a minimal web framework.
  • const mongoose = require("mongoose"); — import Mongoose, an ODM to talk to MongoDB.
  • const bodyParser = require("body-parser"); — parse JSON request bodies (Express >=4.16 has express.json(), but here body-parser is used).
  • const app = express(); — create Express app.
  • const port = 3000; — port where service will listen.
  • app.use(bodyParser.json()); — middleware to parse incoming JSON payloads into req.body.
  • mongoose.connect("mongodb://mongo:27017/users") — connect to MongoDB using hostname mongo (Docker Compose DNS) and DB users.

    • .then(() => console.log(...)) — logs success.
    • .catch(...) — logs failure. Note: if Mongo isn't ready, this will fail once; in production you likely want a retry loop (see the sketch after this list) or mongoose.connect() options with retry logic.
  • const UserSchema = new mongoose.Schema({ name: String, email: String }); — defines a simple schema for users with name and email.

  • const User = mongoose.model("User", UserSchema); — creates a Mongoose model.

  • app.get("/users", async (req, res) => { ... }) — route to return all users as JSON.

  • app.post("/users", async (req, res) => { ... }) — route to create a new user from req.body. On success responds with 201 and the created user.

  • app.get("/", (req, res) => res.send("Hello World!")); — root route for quick check.

  • app.listen(port, () => console.log(...)); — start the server when app starts.
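
As noted above, a retry loop around the MongoDB connection makes startup more robust. A minimal sketch of what that could look like (not part of the tutorial code; the retry count and delay are arbitrary):

// Hypothetical helper: retry the MongoDB connection a few times before giving up.
async function connectMongoWithRetry(retries = 5, delay = 3000) {
  while (retries) {
    try {
      await mongoose.connect("mongodb://mongo:27017/users");
      console.log("Connected to MongoDB");
      return;
    } catch (error) {
      retries--;
      console.error("MongoDB not ready, retries left:", retries, error.message);
      await new Promise((res) => setTimeout(res, delay));
    }
  }
  process.exit(1); // exit non-zero so a restart policy can bring the container back
}

connectMongoWithRetry();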


user-service/package.json

{
  "name": "user-service",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "body-parser": "^2.2.0",
    "express": "^5.1.0",
    "mongoose": "^8.19.3"
  }
}

Notes: lists dependencies. In modern Node, you can use express.json() instead of body-parser, but this works fine.
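
For reference, a minimal sketch of the same setup without body-parser (Express 4.16+ and 5.x):

const express = require("express");
const app = express();
app.use(express.json()); // built-in JSON body parsing, replaces bodyParser.json()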


task-service

task-service/Dockerfile

Same pattern as user-service:

FROM node:22
WORKDIR /app 
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node","index.js" ]

(Explained earlier — just exposes port 3001 and runs index.js.)


task-service/index.js

const express = require("express");
const mongoose = require("mongoose");
const bodyParser = require("body-parser");
const amqp = require("amqplib");

const app = express();
const port = 3001;

app.use(bodyParser.json());

mongoose
  .connect("mongodb://mongo:27017/tasks")
  .then(() => {
    console.log("Connected to MongoDB");
  })
  .catch((error) => {
    console.error("Error connecting to MongoDB:", error);
  });

const TaskSchema = new mongoose.Schema({
  title: String,
  description: String,
  userId: String,
  createdAt: { type: Date, default: Date.now },
});

const Task = mongoose.model("Task", TaskSchema);

let channel, connection;

async function connectRabbitMQWithRetry(retries = 5, delay = 3000) {
  while (retries) {
    try {
      connection = await amqp.connect("amqp://rabbitmq");
      channel = await connection.createChannel();
      await channel.assertQueue("task_created");
      console.log("Connected to RabbitMQ");
      return;
    } catch (error) {
      console.error("RabbitMQ Connection Error: ", error);
      retries--;
      console.error("Retrying again: ", retries);
      await new Promise((res) => setTimeout(res, delay));
    }
  }
}

app.get("/tasks", async (req, res) => {
  const tasks = await Task.find();
  res.json(tasks);
});

app.post("/tasks", async (req, res) => {
  const { title, description, userId } = req.body;
  try {
    const task = new Task({ title, description, userId });
    await task.save();

    const message = {
      taskId: task._id,
      userId,
      title,
    };

    if (!channel) {
      return res.status(503).json({ error: "RabbitMQ not connected" });
    }

    channel.sendToQueue("task_created", Buffer.from(JSON.stringify(message)));

    res.status(201).json(task);
  } catch (error) {
    console.error("Error Saving: ", error);
    res.status(500).json({ error: "Internal Server Error" });
  }
});

app.listen(port, () => {
  console.log(`Task service listening on port ${port}`);
  connectRabbitMQWithRetry();
});

Line-by-line (key points):

  • Imports similar to user-service, plus amqplib to talk to RabbitMQ using AMQP protocol.
  • mongoose.connect("mongodb://mongo:27017/tasks") — connects to tasks database.
  • TaskSchema includes title, description, userId, and createdAt.
  • let channel, connection; — will hold RabbitMQ connection and channel (channels are logical connections inside an AMQP connection).
  • connectRabbitMQWithRetry(retries = 5, delay = 3000) — helper that tries to connect up to 5 times, waiting delay ms between attempts. This mitigates startup race conditions where RabbitMQ isn't ready when the container starts.

    • connection = await amqp.connect("amqp://rabbitmq") — connects to host rabbitmq (Compose DNS).
    • channel = await connection.createChannel() — creates a channel.
    • await channel.assertQueue("task_created") — asserts the queue exists (creates if not).
  • POST /tasks — creates a new Task, persists it, then:

    • Builds message object with taskId, userId, title.
    • If channel is falsy (RabbitMQ not connected), returns 503 Service Unavailable.
    • channel.sendToQueue("task_created", Buffer.from(JSON.stringify(message))); — publishes the message to queue (note: default exchange, queue name used as routing key).
  • app.listen(...); connectRabbitMQWithRetry(); — on startup the service starts listening and then attempts to connect to RabbitMQ.

Why this design? Worker/consumer decoupling — the service that creates tasks does not directly call notification system; instead it publishes an event and moves on. This enables scalability and resilience.
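
One caveat with the code as written: neither the queue nor the messages are durable, so anything still sitting in task_created is lost if RabbitMQ restarts. A sketch of the two lines that would change (the assertQueue call inside connectRabbitMQWithRetry and the sendToQueue call in the POST /tasks handler); note that every service asserting the queue must use the same options, or RabbitMQ rejects the assertQueue call:

// Durable queue survives a broker restart; persistent messages are written to disk.
await channel.assertQueue("task_created", { durable: true });
channel.sendToQueue(
  "task_created",
  Buffer.from(JSON.stringify(message)),
  { persistent: true }
);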


task-service/package.json

{
  "name": "task-service",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "amqplib": "^0.10.9",
    "body-parser": "^2.2.0",
    "express": "^5.1.0",
    "mongoose": "^8.19.3"
  }
}

notification-service

notification-service/Dockerfile

FROM node:22
WORKDIR /app 
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3002
CMD [ "node","index.js" ]

Same pattern. (It exposes port 3002, but this service doesn't actually need HTTP; the EXPOSE line is kept for parity.)


notification-service/index.js

const amqp = require("amqplib");

async function start() {
  try {
    const connection = await amqp.connect("amqp://rabbitmq");
    const channel = await connection.createChannel();
    await channel.assertQueue("task_created");
    console.log("Notification Service is listening to messages");

    channel.consume("task_created", (msg) => {
      const taskData = JSON.parse(msg.content.toString());
      console.log("Notification: NEW TASK: ", taskData.title);
      console.log("Notification: NEW TASK: ", taskData);
      channel.ack(msg);
    });
  } catch (error) {
    console.error("RabbitMQ Connection Error: ", error.message);
  }
}

start();

Explanation:

  • Imports amqplib.
  • start() attempts to connect to RabbitMQ at amqp://rabbitmq.
  • Creates a channel and assertQueue("task_created") to ensure the queue exists.
  • channel.consume("task_created", (msg) => { ... }) — registers a consumer callback for messages arriving on the queue.

    • Parses the message and logs it (your notification logic would go here: send email, push, etc.)
    • channel.ack(msg) — acknowledges the message so RabbitMQ removes it from the queue.
  • start(); — run the consumer.

Note: This service has no retry loop. If RabbitMQ isn't ready at startup, the connection attempt fails once and the process quits, so consider adding the same retry logic as task-service or a supervisor to restart the container. Docker Compose only restarts it if you set a restart policy, and none is defined in this compose file.
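
A minimal sketch of that retry logic, mirroring connectRabbitMQWithRetry from task-service (the retry count and delay are arbitrary):

const amqp = require("amqplib");

async function startWithRetry(retries = 5, delay = 3000) {
  while (retries) {
    try {
      const connection = await amqp.connect("amqp://rabbitmq");
      const channel = await connection.createChannel();
      await channel.assertQueue("task_created");
      console.log("Notification Service is listening to messages");

      channel.consume("task_created", (msg) => {
        const taskData = JSON.parse(msg.content.toString());
        console.log("Notification: NEW TASK: ", taskData);
        channel.ack(msg);
      });
      return; // connected and consuming, stop retrying
    } catch (error) {
      retries--;
      console.error("RabbitMQ not ready, retries left:", retries, error.message);
      await new Promise((res) => setTimeout(res, delay));
    }
  }
  process.exit(1); // give up so a restart policy (if set) can recover the container
}

startWithRetry();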


notification-service/package.json

{
  "name": "notification-service",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "amqplib": "^0.10.9",
    "body-parser": "^2.2.0",
    "express": "^5.1.0",
    "mongoose": "^8.19.3"
  }
}

How to run locally (commands)

From the project root (where docker-compose.yml sits):

  1. Build and run in detached mode:
docker-compose up --build -d
  2. See logs:
docker-compose logs -f
  3. Open RabbitMQ management UI: http://localhost:15672/
  • Default username/password for the base image are guest/guest when accessed from the same host. (If running remotely, change the credentials.)
  • There you can inspect the task_created queue.
  4. Test the API:
  • Create a user:
curl -X POST http://localhost:3000/users -H "Content-Type: application/json" -d '{"name":"Alice","email":"alice@example.com"}'
  • Create a task for that user:
curl -X POST http://localhost:3001/tasks -H "Content-Type: application/json" -d '{"title":"Buy milk","description":"From store","userId":"<userId>"}'

When you POST /tasks, the task-service will save the task and publish to RabbitMQ — the notification-service will consume the message and print the notification in its logs. Check the notification-service logs:

docker-compose logs -f notification-service
# or
docker logs -f notification-service
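
You can also verify the stored data through the GET endpoints:

curl http://localhost:3000/users
curl http://localhost:3001/tasks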

Why RabbitMQ? When and why use a message broker

What is a message broker?

A message broker (RabbitMQ, Kafka, Redis streams, etc.) enables asynchronous communication between services. Instead of calling the notification service directly, task-service posts a message describing an event to the broker. The notification service subscribes and reacts to these events.

Why use a broker (benefits)

  • Decoupling — Producer and consumer do not need to be running at the same time or know about each other’s HTTP API. They only agree on the message format and queue/exchange name.
  • Resilience — If the consumer is down, messages wait in the broker queue for it to come back.
  • Scalability — Multiple consumers can process messages in parallel (consumers scale horizontally).
  • Retry & DLQ — You can configure retries, dead-letter queues for failed messages.
  • Buffering/Load leveling — If a spike in tasks occurs, the queue buffers requests while consumers process them at their pace.
  • Flexibility — Add new consumers (e.g., analytics, audit logs) that subscribe to the same events without modifying producers.
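
For example, a dead-letter setup in amqplib could look roughly like this (the DLQ name is made up for illustration, and every service asserting task_created would need the same arguments, which the tutorial code does not do):

// Inside an async setup function such as connectRabbitMQWithRetry:
// declare a dead-letter queue and route rejected/expired messages into it.
await channel.assertQueue("task_created_dlq");
await channel.assertQueue("task_created", {
  arguments: {
    "x-dead-letter-exchange": "",                  // default (nameless) exchange
    "x-dead-letter-routing-key": "task_created_dlq",
  },
});

// In a consumer, rejecting without requeue sends the message to the DLQ:
// channel.nack(msg, false, false);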

When to use a message broker

  • When operations triggered by an event are not required to complete synchronously (e.g., sending email after creating task).
  • When you expect bursts of events and want to smooth processing.
  • When multiple independent systems should react to the same event (fan-out).
  • When you want to achieve loose coupling between services.

When not to use a message broker

  • For simple CRUD where synchronous response is required (e.g., login). Overusing a broker adds complexity.
  • If your operations must be atomic across services without compensation or two-phase commit (distributed transactions are hard).
  • Small projects / prototypes where plain REST calls are simpler, though adopting a broker from the beginning can pay off later.

Quick troubleshooting tips

  • If task-service says RabbitMQ not connected:

    • Check RabbitMQ status: docker-compose logs rabbitmq and http://localhost:15672 (guest/guest).
    • Ensure task-service attempted to reconnect (see console logs).
  • If Mongo connection fails:

    • Check docker-compose logs mongo and ensure the container started. Use docker exec -it mongo bash + mongosh to test (see the example after this list).
  • If messages aren't delivered:

    • Open RabbitMQ management UI, look at queue lengths, bindings, and consumers.
  • If containers don’t start:

    • docker-compose ps to inspect states; docker-compose logs <service> for errors.
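
For example, a quick sanity check of the users database from inside the Mongo container (assuming mongosh is available in the image, as it is in recent MongoDB images):

docker exec -it mongo mongosh
use users
db.users.find()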

Example flow — what happens when you create a task

  1. Client sends POST /tasks to task-service with { title, description, userId }.
  2. task-service saves task to MongoDB tasks collection.
  3. task-service publishes a JSON message to the task_created queue via RabbitMQ.
  4. notification-service (consumer) receives the message and executes code to send a notification (currently logs to console).
  5. Consumer acks the message so RabbitMQ removes it.

This split of responsibilities makes the architecture robust and easily extensible.


Final remarks

This example is a hands-on beginner-friendly microservice demo:

  • It shows how to persist data (MongoDB), expose HTTP APIs (Express), and decouple services using RabbitMQ.
  • The code is intentionally simple so you can focus on the architecture and messaging patterns.
