Mykhailo Toporkov 🇺🇦

NestJS Microservices Monorepo - Setup, Testing and Containerization

Yo! Today I’m going to demonstrate how to set up a NestJS monorepo with two REST microservices — Users and Posts — each using its own database through Prisma ORM.

For inter-service communication, we’ll use Kafka, taking advantage of the features provided by @nestjs/microservices.

I’ve also included an example of a GitHub Actions workflow for testing the monorepo — it runs tests only for the services that were changed in your commits.

If you’re interested in a specific topic, feel free to use the navigation below:

- Monorepo Setup
- Unit and E2E testing
- Containerization
- CI/CD Testing
- Conclusion

Monorepo Setup

First things first — to create a monorepo, we need a regular Nest project. I’ll create it like this:

```bash
nest new users
```

Now we can convert it into a monorepo by generating a new app inside it, like this:

```bash
cd users && nest generate app posts
```

I’ll also add a couple of shared libraries, common and kafka:

```bash
nest generate library common && nest generate library kafka
```

All of these commands, along with more documentation about NestJS monorepos, libraries, and the CLI, can be found in the official NestJS documentation. What’s important for us is to end up with a structure like the one shown below:

```
📂 apps
 | --- 📂 users
 |
 | --- 📂 posts
 |
📂 libs
 | --- 📂 common
 |
 | --- 📂 kafka
 |
etc.
```
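Under the hood, the Nest CLI tracks all of these projects in nest-cli.json. After the commands above it should contain entries roughly like the following (trimmed for brevity; exact fields may vary between CLI versions):

```json
{
  "monorepo": true,
  "root": "apps/users",
  "sourceRoot": "apps/users/src",
  "projects": {
    "users": { "type": "application", "root": "apps/users", "sourceRoot": "apps/users/src" },
    "posts": { "type": "application", "root": "apps/posts", "sourceRoot": "apps/posts/src" },
    "common": { "type": "library", "root": "libs/common", "sourceRoot": "libs/common/src" },
    "kafka": { "type": "library", "root": "libs/kafka", "sourceRoot": "libs/kafka/src" }
  }
}
```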

I’ll skip the database setup and other boring details, but what’s really important to mention is that I prefer each microservice to have its own .env file at the root of the service. The same goes for the Dockerfile and docker-compose.yml. Since I’m using Prisma as my ORM of choice, the generated files will also be located in the root of each microservice folder.

To summarize, the structure will look like this:

```
📂 apps
 | --- 📂 users
 |      | --- 📂 generated/prisma (excluded by gitignore by default)
 |      | --- 📂 prisma
 |      | --- 📂 test
 |      | --- 📂 src
 |      | --- .env
 |      | --- Dockerfile
 |      | --- docker-compose.yml
 |
etc.
```
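For reference, this layout assumes the Prisma client is generated into each service’s generated/ folder. A minimal apps/users/prisma/schema.prisma header for that could look like this (the output path is my assumption, matching the tree above):

```prisma
generator client {
  provider = "prisma-client-js"
  // emit the client into apps/users/generated/prisma instead of node_modules
  output   = "../generated/prisma"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
```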

Now, regarding communication between microservices, I’ll be using Kafka. Here’s my setup in libs/kafka, starting with a factory for the microservice connection options:

```typescript
import { ConfigService } from '@nestjs/config';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';

export const createKafkaMicroserviceOptions = (configService: ConfigService): MicroserviceOptions => ({
  transport: Transport.KAFKA,
  options: {
    client: {
      clientId: configService.get('APP_NAME') ?? 'default-client',
      brokers: [configService.get('KAFKA_URL') ?? 'localhost:9092'],
    },
    consumer: {
      groupId: `${configService.get('APP_NAME') ?? 'default'}-consumer`,
    },
  },
});
```

The KafkaService then wraps the injected client, exposing typed emit and send helpers:

```typescript
import { ClientKafka } from '@nestjs/microservices';
import { UsersEvents, PostsEvents } from '@libs/kafka/messages';
import { Injectable, OnModuleInit, Inject } from '@nestjs/common';

type KafkaEvents = UsersEvents & PostsEvents;

@Injectable()
export class KafkaService implements OnModuleInit {
  constructor(@Inject('KAFKA_CLIENT') private readonly kafkaClient: ClientKafka) {}

  async onModuleInit() {
    await this.kafkaClient.connect();
  }

  emit<Topic extends keyof KafkaEvents>(topic: Topic, message: KafkaEvents[Topic]) {
    return this.kafkaClient.emit(topic, message);
  }

  send<Topic extends keyof KafkaEvents>(topic: Topic, message: KafkaEvents[Topic]) {
    return this.kafkaClient.send<Topic, KafkaEvents[Topic]>(topic, message);
  }
}
```

Finally, the KafkaModule registers the Kafka client asynchronously from environment config and exposes the KafkaService globally:

```typescript
import { Global, Module } from '@nestjs/common';
import { KafkaService } from '@libs/kafka/kafka.service';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { ClientsModule, Transport } from '@nestjs/microservices';

@Global()
@Module({
  imports: [
    ConfigModule,
    ClientsModule.registerAsync([
      {
        name: 'KAFKA_CLIENT',
        imports: [ConfigModule],
        inject: [ConfigService],
        useFactory: (configService: ConfigService) => {
          const appName = configService.get<string>('APP_NAME');
          const kafkaUrl = configService.get<string>('KAFKA_URL');

          return {
            transport: Transport.KAFKA,
            options: {
              client: {
                clientId: appName ?? 'default-client',
                brokers: kafkaUrl ? [kafkaUrl] : ['localhost:9092'],
              },
              consumer: {
                groupId: `${appName ?? 'default'}-consumer`,
              },
            },
          };
        },
      },
    ]),
  ],
  providers: [KafkaService],
  exports: [KafkaService],
})
export class KafkaModule {}
```

The usage of all this is pretty straightforward: import the KafkaModule into the AppModule of your microservice, and since it’s marked @Global, the KafkaService becomes injectable in any other service.
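A minimal sketch of that wiring (the '@libs/kafka' import path and UsersModule are assumptions):

```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { KafkaModule } from '@libs/kafka';
import { UsersModule } from './users/users.module';

@Module({
  imports: [
    // make env variables available app-wide for the Kafka factory
    ConfigModule.forRoot({ isGlobal: true }),
    KafkaModule,
    UsersModule,
  ],
})
export class AppModule {}
```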

createKafkaMicroserviceOptions, on the other hand, is needed to hook the app up to @nestjs/microservices, which provides decorators such as @MessagePattern and @EventPattern for working comfortably with event-based communication. Here’s how it’s wired in main.ts:

```typescript
import { ConfigService } from '@nestjs/config';
import { AppModule } from '@users-micros/app.module';
import { NestFactory, Reflector } from '@nestjs/core';
import { SwaggerModule, DocumentBuilder } from '@nestjs/swagger';
import { createKafkaMicroserviceOptions } from '@libs/kafka/kafka.config';
import { ClassSerializerInterceptor, ValidationPipe } from '@nestjs/common';
import { DatabaseExceptionFilter } from '@users-micros/database/database.filter';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  const configService = app.get(ConfigService);

  const kafkaMicroserviceOptions = createKafkaMicroserviceOptions(configService);

  app.connectMicroservice(kafkaMicroserviceOptions);

  app.useGlobalPipes(new ValidationPipe({ whitelist: true, transform: true }));

  app.useGlobalInterceptors(new ClassSerializerInterceptor(app.get(Reflector)));

  app.useGlobalFilters(new DatabaseExceptionFilter());

  const config = new DocumentBuilder()
    .setTitle('Users microservice')
    .setVersion('0.0.1')
    .setDescription('The users microservice API description')
    .build();

  SwaggerModule.setup('docs', app, () => SwaggerModule.createDocument(app, config));

  await app.startAllMicroservices();

  const port = configService.get<number>('APP_PORT') ?? 3021;

  await app.listen(port);
}
bootstrap();
```
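Since connectMicroservice attaches the Kafka transport, handlers on the consuming side can subscribe with those decorators. A hypothetical handler in the Posts service (the topic name and payload shape are assumptions):

```typescript
import { Controller } from '@nestjs/common';
import { EventPattern, Payload } from '@nestjs/microservices';

@Controller()
export class UsersEventsController {
  // Fires for every message emitted to this topic by the Users service.
  @EventPattern('users.user-updated')
  handleUserUpdated(@Payload() message: { name: string }) {
    // e.g. update a denormalized author name stored alongside posts
  }
}
```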

Oh, looking back, I really skipped a lot of stuff… That might raise questions like “How did you set up this or that?” But as I mentioned before, this isn’t a guide on building a microservice app from scratch — it’s about setting up the monorepo. If you’re patient enough to read until the end, you’ll find a link to the repository that was used to create this post. 🙂


Unit and E2E testing

The unit tests are also pretty straightforward, and even easier if you don’t set up path aliases for your microservice folders (aliases need matching moduleNameMapper entries in your Jest config). Here’s an example of unit tests for the UsersService:

```typescript
import * as utils from '@libs/common/utils';
import { KafkaMock } from '@libs/kafka/kafka.mock';
import { NotFoundException } from '@nestjs/common';
import { User } from '@users-micros/generated/prisma';
import { UsersService } from '@users-micros/users/users.service';
import { UsersTopics } from '@libs/kafka/messages/users.messages';
import { DatabaseService } from '@users-micros/database/database.service';

export const mockUsers: User[] = [
  {
    id: '1',
    name: 'John Doe',
    createdAt: new Date(),
    updatedAt: new Date(),
  },
  {
    id: '2',
    name: 'Jane Smith',
    createdAt: new Date(),
    updatedAt: new Date(),
  },
];

export const mockUser = mockUsers[0];

describe('UsersService', () => {
  let usersService: UsersService;
  let databaseService: jest.Mocked<DatabaseService>;
  let kafkaService: typeof KafkaMock;

  beforeEach(() => {
    databaseService = {
      user: {
        create: jest.fn().mockResolvedValue(mockUser),
        findFirst: jest.fn((args) =>
          args.where.id === mockUser.id ? Promise.resolve(mockUser) : Promise.resolve(null),
        ),
        findMany: jest.fn().mockResolvedValue(mockUsers),
        count: jest.fn().mockResolvedValue(mockUsers.length),
        update: jest.fn((args) =>
          args.where.id === mockUser.id ? Promise.resolve({ ...mockUser, ...args.data }) : Promise.resolve(null),
        ),
      },
    } as any;

    kafkaService = { ...KafkaMock };

    usersService = new UsersService(kafkaService as any, databaseService);

    jest.spyOn(utils, 'createSearchQuery');
    jest.spyOn(utils, 'createSortQuery');
  });

  describe('createUser', () => {
    it('should create a user', async () => {
      const data = { name: 'John Doe', email: 'john@example.com' };
      const result = await usersService.createUser(data as any);
      expect(databaseService.user.create).toHaveBeenCalledWith({ data });
      expect(result).toBe(mockUser);
    });
  });

  describe('findUsers', () => {
    it('should return users with filters', async () => {
      const query = { skip: 0, take: 10, search: 'John', sortBy: 'name', sortOrder: 'asc' } as any;
      const result = await usersService.findUsers(query);
      expect(utils.createSearchQuery).toHaveBeenCalledWith(query.search, expect.anything());
      expect(utils.createSortQuery).toHaveBeenCalledWith(query.sortBy, query.sortOrder);
      expect(databaseService.user.findMany).toHaveBeenCalled();
      expect(result).toEqual(mockUsers);
    });
  });

  describe('findUsersCount', () => {
    it('should return count of users', async () => {
      const query = { search: 'John' } as any;
      const result = await usersService.findUsersCount(query);
      expect(utils.createSearchQuery).toHaveBeenCalledWith(query.search, expect.anything());
      expect(databaseService.user.count).toHaveBeenCalled();
      expect(result).toBe(mockUsers.length);
    });
  });

  describe('findUser', () => {
    it('should return user if found', async () => {
      const result = await usersService.findUser(mockUser.id);
      expect(databaseService.user.findFirst).toHaveBeenCalledWith({ where: { id: mockUser.id } });
      expect(result).toBe(mockUser);
    });

    it('should throw NotFoundException if not found', async () => {
      await expect(usersService.findUser('no-id')).rejects.toThrow(NotFoundException);
    });
  });

  describe('updateUser', () => {
    it('should update user and emit kafka event', async () => {
      const data = { name: 'Updated Name' };
      const result = await usersService.updateUser(mockUser.id, data);
      expect(databaseService.user.update).toHaveBeenCalledWith({ where: { id: mockUser.id }, data });
      expect(kafkaService.emit).toHaveBeenCalledWith(UsersTopics.USER_UPDATED, { name: 'Updated Name' });
      expect(result.name).toBe('Updated Name');
    });

    it('should throw NotFoundException if user does not exist', async () => {
      await expect(usersService.updateUser('no-id', { name: 'test' })).rejects.toThrow(NotFoundException);
    });
  });
});
```

E2E tests require a database connection because they simulate the real behavior of your microservice. It might be a good idea to first check the Containerization section of this post:

```typescript
import * as request from 'supertest';
import { INestApplication, ValidationPipe } from '@nestjs/common';
import { Test, TestingModule } from '@nestjs/testing';
import { FixtureModule } from '@users-micros/test/fixture.module';
import { DatabaseService } from '@users-micros/database/database.service';

describe('UsersController (e2e)', () => {
  let app: INestApplication;
  let databaseService: DatabaseService;

  beforeAll(async () => {
    const moduleFixture: TestingModule = await Test.createTestingModule({
      imports: [FixtureModule],
    }).compile();

    app = moduleFixture.createNestApplication();
    app.useGlobalPipes(new ValidationPipe({ transform: true }));
    await app.init();

    databaseService = moduleFixture.get(DatabaseService);
  });

  beforeEach(async () => {
    await databaseService.user.deleteMany();
  });

  afterAll(async () => {
    await databaseService.user.deleteMany();
    await app.close();
  });

  describe('POST /users', () => {
    it('should create a user successfully', async () => {
      const userData = { name: 'John Doe' };

      const response = await request(app.getHttpServer()).post('/users').send(userData).expect(200);

      expect(response.body).toMatchObject({
        id: expect.any(String),
        name: userData.name,
        createdAt: expect.any(String),
        updatedAt: expect.any(String),
      });

      const user = await databaseService.user.findUnique({
        where: { id: response.body.id },
      });
      expect(user).toBeTruthy();
      expect(user?.name).toBe(userData.name);
    });

    it('should return 400 when name is missing', async () => {
      await request(app.getHttpServer()).post('/users').send({}).expect(400);
    });

    it('should return 400 when name is empty', async () => {
      await request(app.getHttpServer()).post('/users').send({ name: '' }).expect(400);
    });

    it('should return 400 when name is too long', async () => {
      await request(app.getHttpServer())
        .post('/users')
        .send({ name: 'a'.repeat(65) })
        .expect(400);
    });
  });

  describe('GET /users', () => {
    beforeEach(async () => {
      // Create test users
      await databaseService.user.createMany({
        data: [
          { name: 'Alice Smith' },
          { name: 'Bob Johnson' },
          { name: 'Carol Davis' },
          { name: 'David Wilson' },
          { name: 'Eve Brown' },
        ],
      });
    });

    it('should return paginated users', async () => {
      const response = await request(app.getHttpServer()).get('/users').query({ skip: 0, take: 10 }).expect(200);

      expect(response.body).toMatchObject({
        skip: 0,
        take: 10,
        total: 5,
        data: expect.arrayContaining([
          expect.objectContaining({
            id: expect.any(String),
            name: expect.any(String),
            createdAt: expect.any(String),
            updatedAt: expect.any(String),
          }),
        ]),
      });
      expect(response.body.data).toHaveLength(5);
    });

    it('should search users by name', async () => {
      const response = await request(app.getHttpServer()).get('/users').query({ search: 'alice' }).expect(200);

      expect(response.body.data).toHaveLength(1);
      expect(response.body.data[0].name).toBe('Alice Smith');
    });

    it('should sort users by name', async () => {
      const response = await request(app.getHttpServer())
        .get('/users')
        .query({ sortBy: 'name', sortOrder: 'asc' })
        .expect(200);

      const names = response.body.data.map((user) => user.name);
      expect(names).toEqual([...names].sort());
    });

    it('should return 400 for invalid sort field', async () => {
      await request(app.getHttpServer()).get('/users').query({ sortBy: 'invalid' }).expect(400);
    });
  });

  describe('GET /users/:userId', () => {
    let testUser;

    beforeEach(async () => {
      testUser = await databaseService.user.create({
        data: { name: 'Test User' },
      });
    });

    it('should return a user by id', async () => {
      const response = await request(app.getHttpServer()).get(`/users/${testUser.id}`).expect(200);

      expect(response.body).toMatchObject({
        id: testUser.id,
        name: testUser.name,
        createdAt: expect.any(String),
        updatedAt: expect.any(String),
      });
    });

    it('should return 404 for non-existing user', async () => {
      await request(app.getHttpServer()).get('/users/non-existing-id').expect(404);
    });
  });

  describe('PATCH /users/:userId', () => {
    let testUser;

    beforeEach(async () => {
      testUser = await databaseService.user.create({
        data: { name: 'Test User' },
      });
    });

    it('should update a user successfully', async () => {
      const updateData = { name: 'Updated Name' };

      const response = await request(app.getHttpServer()).patch(`/users/${testUser.id}`).send(updateData).expect(200);

      expect(response.body).toMatchObject({
        id: testUser.id,
        name: updateData.name,
        createdAt: expect.any(String),
        updatedAt: expect.any(String),
      });

      const updatedUser = await databaseService.user.findUnique({
        where: { id: testUser.id },
      });
      expect(updatedUser).toBeTruthy();
      expect(updatedUser?.name).toBe(updateData.name);
    });

    it('should return 404 for non-existing user', async () => {
      await request(app.getHttpServer()).patch('/users/non-existing-id').send({ name: 'New Name' }).expect(404);
    });

    it('should return 400 when update data is invalid', async () => {
      await request(app.getHttpServer()).patch(`/users/${testUser.id}`).send({ name: '' }).expect(400);

      await request(app.getHttpServer())
        .patch(`/users/${testUser.id}`)
        .send({ name: 'a'.repeat(65) })
        .expect(400);
    });
  });
});

```
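The FixtureModule referenced above isn’t shown in the post; conceptually it mirrors the AppModule but swaps the Kafka client for a mock, so e2e runs only need a database. A hypothetical sketch (the module and env-file names are assumptions):

```typescript
import { Global, Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { KafkaMock } from '@libs/kafka/kafka.mock';
import { KafkaService } from '@libs/kafka/kafka.service';
import { UsersModule } from '@users-micros/users/users.module';
import { DatabaseModule } from '@users-micros/database/database.module';

// Global mock module so every consumer of KafkaService gets the mock
// and no real broker is needed during e2e runs.
@Global()
@Module({
  providers: [{ provide: KafkaService, useValue: KafkaMock }],
  exports: [KafkaService],
})
class KafkaMockModule {}

@Module({
  imports: [
    ConfigModule.forRoot({ isGlobal: true, envFilePath: 'apps/users/.env.test' }),
    KafkaMockModule,
    DatabaseModule,
    UsersModule,
  ],
})
export class FixtureModule {}
```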

Containerization

For now, let’s just add a docker-compose.yml and a Dockerfile for each microservice. As a result, the project structure should look something like this:

```
📂 apps
 | --- 📂 users
 |      | --- 📂 src
 |      | --- .env
 |      | --- Dockerfile
 |      | --- docker-compose.yml
 |
 | --- 📂 posts
 |      | --- 📂 src
 |      | --- .env
 |      | --- Dockerfile
 |      | --- docker-compose.yml
 |
📂 libs
 | --- 📂 kafka
 |      | --- 📂 src
 |      | --- .env
 |      | --- docker-compose.yml
```

Here’s an example of my docker-compose.yml. When developing locally, I’ll be honest — I usually comment out the app service because I don’t really need it:

```yaml
services:
  app:
    container_name: ${APP_NAME}-app
    build:
      context: ../..
      dockerfile: apps/${APP_NAME}/Dockerfile
      args:
        APP_NAME: ${APP_NAME}
    restart: on-failure
    env_file:
      - ./.env
    ports:
      - '${APP_PORT}:${APP_PORT}'
    depends_on:
      - database

  database:
    container_name: ${APP_NAME}-database
    image: postgres:latest
    restart: always
    env_file:
      - ./.env
    environment:
      POSTGRES_DB: '${DATABASE_NAME}'
      POSTGRES_USER: '${DATABASE_USER}'
      POSTGRES_PASSWORD: '${DATABASE_PASSWORD}'
    ports:
      - '${DATABASE_PORT}:5432'
    volumes:
      - database:/var/lib/postgresql/data

volumes:
  database:
    driver: local
```
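One usage note: Compose resolves the ${APP_NAME}-style placeholders from the .env sitting next to the compose file, so run it from the service folder. For example:

```bash
cd apps/users

# spin up only the database for local development
docker compose up -d database

# or build and run the whole service
docker compose up --build
```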

Here are examples of the Dockerfile and the .env as well:

```dockerfile
FROM node:22-alpine AS base

# build stage
FROM base AS build 
ARG APP_NAME 
ARG NODE_ENV=development 
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app 
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx prisma generate --schema apps/${APP_NAME}/prisma/schema.prisma \
    && npm run build ${APP_NAME}

# production stage
FROM base AS production 
ARG APP_NAME 
ARG NODE_ENV=production 
ENV NODE_ENV=${NODE_ENV} HUSKY=0
WORKDIR /usr/src/app 
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /usr/src/app/dist ./dist
COPY --from=build /usr/src/app/node_modules/.prisma ./node_modules/.prisma
COPY --from=build /usr/src/app/apps/${APP_NAME}/prisma ./apps/${APP_NAME}/prisma
COPY --from=build /usr/src/app/apps/${APP_NAME}/generated ./apps/${APP_NAME}/generated

ENV APP_MAIN_FILE=dist/apps/${APP_NAME}/main
ENV DATABASE_SCHEMA=apps/${APP_NAME}/prisma/schema.prisma
CMD ["sh", "-c", "npx prisma migrate deploy --schema ${DATABASE_SCHEMA} && node ${APP_MAIN_FILE}"]
```

And the .env for the users service:

```env
# APP
APP_NAME=users
APP_PORT=3021

# Database
DATABASE_PORT=3022
DATABASE_NAME="postgresql"
DATABASE_USER="johndoe"
DATABASE_PASSWORD="randompassword"
DATABASE_URL="postgresql://johndoe:randompassword@database:5432/postgresql"

# Kafka
KAFKA_URL=localhost:3011
```

Setting up Kafka is up to you, of course. 😉 If you’d rather have it all ready for local development, check out the repo link at the end of the post.
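That said, here’s one way a single-broker KRaft setup in libs/kafka/docker-compose.yml could look. This is a sketch rather than the repo’s exact file, and it assumes the KAFKA_URL=localhost:3011 from the .env above:

```yaml
services:
  kafka:
    image: bitnami/kafka:latest
    ports:
      - '3011:9092'
    environment:
      KAFKA_CFG_NODE_ID: 0
      KAFKA_CFG_PROCESS_ROLES: 'controller,broker'
      KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: '0@kafka:9093'
      KAFKA_CFG_LISTENERS: 'PLAINTEXT://:9092,CONTROLLER://:9093'
      # advertise the host-mapped port so local clients can connect
      KAFKA_CFG_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:3011'
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER
```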

I should also mention: if you want to run all the apps locally without Docker, be careful with environment variables like DATABASE_URL and KAFKA_URL, as they need to be updated to match your local setup.
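For example, with the compose files above, a host-run users service would need something like this (values taken from the .env example, with the container hostname swapped for localhost and the published ports):

```env
DATABASE_URL="postgresql://johndoe:randompassword@localhost:3022/postgresql"
KAFKA_URL=localhost:3011
```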


CI/CD Testing

For testing in CI/CD, I’ll be using GitHub Actions. If you’re not familiar with them, please read the documentation.

To efficiently run tests inside a monorepo, we need to detect changes across apps and libraries. Therefore, any action that involves testing should start with this:

```yaml
name: E2E Testing

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      users: ${{ steps.filter.outputs.users }}
      posts: ${{ steps.filter.outputs.posts }}
      common: ${{ steps.filter.outputs.common }}
    steps:
      - uses: actions/checkout@v4

      - name: Detect changes
        id: filter
        uses: dorny/paths-filter@v3
        with:
          filters: |
            users:
              - 'apps/users/**'
              - 'libs/common/**'
            posts:
              - 'apps/posts/**'
              - 'libs/common/**'
            common:
              - 'libs/common/**'
```

Then, continuing the jobs section of the same workflow, each service gets its own test job that runs only when that service (or the shared common lib) has changed:

```yaml
  test-users:
    needs: detect-changes
    if: ${{ needs.detect-changes.outputs.users == 'true' }}
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_USER: johndoe
          POSTGRES_PASSWORD: randompassword
          POSTGRES_DB: postgresql-test
        ports:
          - 3022:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: chmod -R +x ./scripts
      - run: npm run database:generate users
      - run: npm run database:push users test
      - run: npm run test:e2e users

  test-posts:
    needs: detect-changes
    if: ${{ needs.detect-changes.outputs.posts == 'true' }}
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_USER: johndoe
          POSTGRES_PASSWORD: randompassword
          POSTGRES_DB: postgresql-test
        ports:
          - 3032:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: chmod -R +x ./scripts
      - run: npm run database:generate posts
      - run: npm run database:push posts test
      - run: npm run test:e2e posts
```

You can see that I’m using scripts such as npm run database:generate posts and npm run test:e2e posts. Relevant examples are available in the repository linked below.
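In case you’re wondering how those scripts receive the service name: npm forwards positional arguments to the script it runs (the Dockerfile above relies on the same trick with npm run build ${APP_NAME}), so a thin shell wrapper can read the app name as $1. A hypothetical scripts/database-generate.sh:

```bash
#!/bin/sh
# Usage: npm run database:generate users
# npm passes "users" through to this script as $1.
set -e
APP_NAME=$1
npx prisma generate --schema "apps/${APP_NAME}/prisma/schema.prisma"
```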


Conclusion

That’s it for now regarding the NestJS monorepo! I hope you found this post useful. Since you made it to the end, here’s the link to the monorepo used in this post. I originally built it to prove a concept to my team, but now that it has served its purpose — feel free to use it!
