Part 1/3: How to Scale a Chat App to Millions of Users in NestJS

Hello! In this guide, I will show you how to scale a NestJS chat app to millions of users.

Prerequisites:

  • Experience with JavaScript.
  • Experience with NestJS.
  • Experience as a backend engineer.

Access the GitHub repository for this project here.

To scale your app, you will need to scale horizontally by creating multiple instances of your NestJS app.

Each instance of your app should have the following features:

  1. A WebSocket integration that keeps track of connected clients in a singleton in-memory array. (Providers are singletons by default in NestJS; see Injection Scopes.)
  2. A Redis publisher that announces each received request to all other instances, and a subscriber that listens for these announcements. This is necessary because a client may connect via WebSocket to one instance (e.g., instance number 3) and be registered as a client there, while a later request that triggers a chat notification lands on a different instance (e.g., instance number 32), since a load balancer sits in front of the apps. That instance does not have the connected client in its singleton memory array, so it announces the request to all other instances through Redis; each instance then looks up its own array of connected clients and notifies any matches. See the sketch right below.
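
To make the fan-out concrete before diving into the NestJS code, here is a minimal standalone sketch of the pattern using ioredis. The names (instanceId, localClients, notifyUser, the chat-events channel) are illustrative, not taken from the repo:

import Redis from 'ioredis';
import { randomUUID } from 'crypto';

const instanceId = randomUUID(); // identifies this app instance
const subscriber = new Redis(); // dedicated connection: SUBSCRIBE blocks regular commands
const publisher = new Redis();

// userId -> connected WebSocket (simplified to one socket per user here)
const localClients = new Map<string, { send(msg: string): void }>();

subscriber.subscribe('chat-events');
subscriber.on('message', (_channel, raw) => {
  const event = JSON.parse(raw) as { from: string; userId: string; message: string };
  if (event.from === instanceId) return; // we already delivered locally when publishing
  localClients.get(event.userId)?.send(event.message); // deliver if the user is connected here
});

function notifyUser(userId: string, message: string) {
  localClients.get(userId)?.send(message); // local delivery
  // Fan out so the instance actually holding the user's socket can deliver too.
  publisher.publish('chat-events', JSON.stringify({ from: instanceId, userId, message }));
}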

Here's how you can achieve this:

  1. Install WebSockets in your NestJS app. Follow the official NestJS guide. If you'd like to jump ahead and see the full example, the GitHub repo is here.

  2. Create the WebSocket client manager service (explained below).

After setting up WebSockets in your app, we need to define an entrypoint gateway:

@WebSocketGateway({ path: '/entrypoint' })
export class EntrypointGateway {}
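The path option and the later use of client.protocol, client.send, and client.close indicate the raw ws adapter rather than the default socket.io one. If you are wiring this up yourself rather than cloning the repo, the bootstrap typically looks like this (assumed; this file is not shown in the article):

// main.ts — assumed bootstrap wiring for the ws adapter
import { NestFactory } from '@nestjs/core';
import { WsAdapter } from '@nestjs/platform-ws';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useWebSocketAdapter(new WsAdapter(app)); // raw `ws` sockets instead of socket.io
  await app.listen(3000);
}
bootstrap();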

Then, create a new lifecycle gateway that extends this entrypoint. This lifecycle gateway contains the logic to insert clients into the WebSocket client manager.

@Injectable()
export class LifecycleGateway
  extends EntrypointGateway
  implements OnGatewayInit, OnGatewayConnection, OnGatewayDisconnect
{
  private readonly logger = new Logger(LifecycleGateway.name);

  constructor(
    private readonly jwtService: JwtService,
    private readonly wsClientManager: WsClientManager,
  ) {
    super();
  }

  afterInit() {
    this.logger.debug('Websockets initialized ' + this.constructor.name);
  }

  handleConnection(client: any) {
    const authUserTokenData = this.getDecodedAuthToken(client);

    if (!authUserTokenData) {
      client.close();
      return;
    }

    this.wsClientManager.addConnection(client, authUserTokenData);
  }

  handleDisconnect(client: any) {
    this.wsClientManager.removeConnection(client);
  }

  getDecodedAuthToken(client: any) {
    let decodedJwt: DecodedAuthToken | null = null;

    try {
      if (client.protocol) {
        // The client passes its JWT as the WebSocket subprotocol,
        // which the ws library exposes as client.protocol.
        decodedJwt = this.jwtService.verify(client.protocol, {
          secret: jwtConstants.secret,
        }) as DecodedAuthToken;
      }
    } catch (e) {
      // Invalid or expired token: leave decodedJwt null so the caller closes the connection.
    }

    return decodedJwt;
  }
}
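The gateway references a jwtConstants object and a DecodedAuthToken type that are not shown above. Minimal versions might look like this (assumed shapes; check the repository for the actual definitions):

// Assumed definitions — the repo's versions may differ.
export const jwtConstants = {
  secret: process.env.JWT_SECRET ?? 'dev-only-secret', // never hardcode a real secret
};

export interface DecodedAuthToken {
  sub: string; // user ID
  exp: number; // expiry as a UNIX timestamp in seconds
}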

To inject the Redis connection into our app, we use the @liaoliaots/nestjs-redis package, installed with the following yarn command:

 yarn add @liaoliaots/nestjs-redis

Next, we add the Redis module configuration with separate subscriber and publisher connections. Two connections are required because a Redis connection in subscribe mode cannot issue regular commands such as PUBLISH.

RedisModule.forRootAsync({
  inject: [ConfigService],
  useFactory: async (config: ConfigService) => ({
    config: [
      {
        namespace: 'subscriber',
        host: config.get('REDIS_HOST'),
        port: config.get('REDIS_PORT'),
      },
      {
        namespace: 'publisher',
        host: config.get('REDIS_HOST'),
        port: config.get('REDIS_PORT'),
      },
    ],
  }),
}),

In the WebSocket client manager, you will find the following:

  • We generate a unique Redis client ID using randomUUID from Node.js's built-in crypto module.

Each NestJS app instance subscribes to the three Redis channels we defined:

  • SendWsMessageToAllClients
  • SendWsMessageToSomeClients
  • SendWsMessageToOneClient

When a NestJS instance receives a message, it first checks that the message was published by a different instance (by comparing the from field against its own redisClientId). It then delivers the message, this time with the shouldPublishToRedis parameter set to false to avoid an infinite publish loop. Each instance delivers the message to any matching clients it finds in its singleton memory array.

This approach also handles the case where a single user uses the app on multiple devices simultaneously (e.g., PC and mobile app). The user receives chat notifications on all devices because, as the code shows, we use a Map that stores all connected WebSocket clients for every user ID.
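
The manager below also relies on a channel enum and a few message types. Based on how they are used in the code, they presumably look like this (assumed definitions, inferred from their usage):

// Assumed definitions, inferred from their usage in WsClientManager.
export enum RedisSubscribeChannel {
  SendWsMessageToAllClients = 'SendWsMessageToAllClients',
  SendWsMessageToSomeClients = 'SendWsMessageToSomeClients',
  SendWsMessageToOneClient = 'SendWsMessageToOneClient',
}

export interface RedisPubSubMessage {
  from: string; // redisClientId of the publishing instance
  message: string; // payload forwarded to the WebSocket clients
}

export interface RedisPubSubMessageWithClientId extends RedisPubSubMessage {
  clientId: string; // target user ID
}

export interface RedisPubSubMessageWithClientIds extends RedisPubSubMessage {
  clientIds: string[]; // target user IDs
}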

@Injectable()
export class WsClientManager {
  private readonly connectedClients = new Map<string, any[]>();
  private readonly redisClientId = `ws_socket_client-${crypto.randomUUID()}`;

  constructor(
    @InjectRedis('subscriber') private readonly subscriberRedis: Redis,
    @InjectRedis('publisher') private readonly publisherRedis: Redis,
  ) {
    this.subscriberRedis.subscribe(
      RedisSubscribeChannel.SendWsMessageToAllClients,
    );
    this.subscriberRedis.subscribe(
      RedisSubscribeChannel.SendWsMessageToSomeClients,
    );
    this.subscriberRedis.subscribe(
      RedisSubscribeChannel.SendWsMessageToOneClient,
    );

    this.subscriberRedis.on('message', (channel, message) => {
      const data = JSON.parse(message) as RedisPubSubMessage;
      if (data.from !== this.redisClientId) {
        switch (channel) {
          case RedisSubscribeChannel.SendWsMessageToAllClients:
            this.sendMessageToAllClients(data.message, false);
            break;
          case RedisSubscribeChannel.SendWsMessageToSomeClients:
            this.sendMessageToClients(
              (data as RedisPubSubMessageWithClientIds).clientIds,
              data.message,
              false,
            );
            break;
          case RedisSubscribeChannel.SendWsMessageToOneClient:
            this.sendMessageToClient(
              (data as RedisPubSubMessageWithClientId).clientId,
              data.message,
              false,
            );
            break;
        }
      }
    });
  }

  addConnection(client: any, authUserTokenData: DecodedAuthToken) {
    const userId = authUserTokenData.sub;

    this.setUserIdOnClient(client, userId);
    const clientsPool = this.getClientsPool(client);
    this.connectedClients.set(
      userId,
      clientsPool ? [...clientsPool, client] : [client],
    );

    setTimeout(() => {
      client.close(); // This will trigger removeConnection from the LifecycleGateway's handleDisconnect
    }, this.getConnectionLimit(authUserTokenData));
  }

  removeConnection(client: any) {
    const clientsPool = this.getClientsPool(client);
    if (!clientsPool) {
      // Client was never registered (e.g., rejected during authentication).
      return;
    }

    const newPool = clientsPool.filter((c) => c !== client);

    if (!newPool.length) {
      this.connectedClients.delete(client.userId);
    } else {
      this.connectedClients.set(client.userId, newPool);
    }
  }

  private setUserIdOnClient(client: any, userId: string) {
    client.userId = userId;
  }

  private getClientsPool(client: any) {
    return this.connectedClients.get(client.userId);
  }

  private getConnectionLimit(tokenData: DecodedAuthToken) {
    return tokenData.exp * 1000 - Date.now();
  }

  getConnectedClientIds() {
    // Every key of the map is the user ID of a connected client.
    return [...this.connectedClients.keys()];
  }

  sendMessageToClient(
    clientId: string,
    message: string,
    shouldPublishToRedis = true,
  ) {
    if (shouldPublishToRedis) {
      this.publisherRedis.publish(
        RedisSubscribeChannel.SendWsMessageToOneClient,
        JSON.stringify({
          message,
          clientId,
          from: this.redisClientId,
        }),
      );
    }

    const clientPool = this.connectedClients.get(clientId);

    if (clientPool) {
      clientPool.forEach((client) => {
        client.send(message);
      });
    }
  }

  sendMessageToClients(
    clientIds: string[],
    message: string,
    shouldPublishToRedis = true,
  ) {
    if (shouldPublishToRedis) {
      this.publisherRedis.publish(
        RedisSubscribeChannel.SendWsMessageToSomeClients,
        JSON.stringify({
          message,
          clientIds,
          from: this.redisClientId,
        }),
      );
    }

    this.connectedClients.forEach((clientPool, clientId) => {
      if (clientIds.includes(clientId)) {
        clientPool.forEach((client) => {
          client.send(message);
        });
      }
    });
  }

  sendMessageToAllClients(message: string, shouldPublishToRedis = true) {
    if (shouldPublishToRedis) {
      this.publisherRedis.publish(
        RedisSubscribeChannel.SendWsMessageToAllClients,
        JSON.stringify({
          message,
          from: this.redisClientId,
        }),
      );
    }

    this.connectedClients.forEach((clientPool) => {
      clientPool.forEach((client) => {
        client.send(message);
      });
    });
  }
}
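For completeness, a gateway that consumes WsClientManager might look like the sketch below. The handler name matches the event used in the browser test at the end of this post; the rest (the hypothetical ChatGateway, the payload shape) is illustrative, not the repo's exact code:

// Hypothetical consumer of WsClientManager (decorators from '@nestjs/websockets').
@Injectable()
export class ChatGateway extends EntrypointGateway {
  constructor(private readonly wsClientManager: WsClientManager) {
    super();
  }

  @SubscribeMessage('send_chat_message_to_participant')
  handleChatMessage(
    @MessageBody() data: { message: string; participantId: string },
  ) {
    // Delivers locally and publishes to Redis so every other instance
    // can deliver to the participant's sockets as well.
    this.wsClientManager.sendMessageToClient(data.participantId, data.message);
  }
}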

Now it's time to test our app. First, clone the project. Make sure you have Docker installed.

This is the Dockerfile for our NestJS app (represents a single instance):

Dockerfile:

FROM node:18.13.0-alpine AS development

WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./

RUN yarn install --frozen-lockfile \
&& yarn cache clean

COPY . .

RUN yarn build

# Dev-mode start: re-run install because the mounted node_modules volume shadows the image's copy
CMD yarn install; yarn start:debug;

In the docker-compose file, we use our backend image generated from the Dockerfile above. We also have an Nginx server to act as a load balancer between app instances, Postgres as the database, and Redis as the centralized database used for communication between app instances. In this example, we simulate the presence of five NestJS app instances.

docker-compose.yml:

version: '3.7'

services:
  backend:
    image: scalable-chat-app-example-backend
    build:
      context: ./../
      dockerfile: ./docker/Dockerfile
      target: development
    volumes:
      - ./../:/usr/src/app
      - scalable-chat-app-example-backend-node-modules:/usr/src/app/node_modules
      - scalable-chat-app-example-backend-dist:/usr/src/app/dist
    ports:
      - '3000'
    networks:
      - mainnet
    depends_on:
      - postgres
      - redis
    restart: unless-stopped
    scale: 5

  nginx:
    container_name: scalable-chat-app-example-nginx-load-balancer
    image: nginx:latest
    volumes:
      - ./../nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - backend
    networks:
      - mainnet
    ports:
      - '3000:3000'

  postgres:
    container_name: scalable-chat-app-example-postgres-db
    image: postgres:15.0
    networks:
      - mainnet
    environment:
      TZ: ${DB_TIMEZONE}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
      PG_DATA: /var/lib/postgresql/data
    ports:
      - '5432:5432'
    volumes:
      - scalable-chat-app-example-pgdata:/var/lib/postgresql/data

  redis:
    container_name: scalable-chat-app-example-redis-db
    image: redis:7.0.7
    networks:
      - mainnet
    expose:
      - '6379'
    volumes:
      - scalable-chat-app-example-redisdata:/data

networks:
  mainnet:

volumes:
  scalable-chat-app-example-pgdata:
  scalable-chat-app-example-backend-node-modules:
  scalable-chat-app-example-backend-dist:
  scalable-chat-app-example-redisdata:

We also need the following Nginx configuration to handle WebSocket connections. The upstream block lists the five backend hostnames that Docker Compose generates from scale: 5, and the Upgrade/Connection headers are required for Nginx to proxy the WebSocket handshake correctly:

nginx.conf:

user nginx;

events {
    worker_connections 1000;
}

http {
    upstream app {
        server scalable-chat-app-example-backend-1:3000;
        server scalable-chat-app-example-backend-2:3000;
        server scalable-chat-app-example-backend-3:3000;
        server scalable-chat-app-example-backend-4:3000;
        server scalable-chat-app-example-backend-5:3000;
    }

    server {
        listen 3000;

        location / {
            proxy_pass http://app;
            # WebSocket support
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;
        }

        client_max_body_size 1000M;
    }
}

To set up the example, navigate to the repository directory and execute the following three commands:

Copy sample.env:

cp sample.env .env

Run containers:

yarn dc up 

Initialize the Docker database:

yarn dc-db-init

If you want to clean up the project, you can run the following command:

yarn dc-clean

Your database is initialized with five users, each with 1234 as the password.

Open Swagger at http://localhost:3000/docs.

Log in multiple users to obtain the bearer tokens needed for WebSocket authentication.

Open as many browser tabs as you want and run the following in the console of each (substituting a bearer token from the login response):

const ws = new WebSocket('ws://localhost:3000/entrypoint', 'introduceHereBearerTokenFromAuthLogin')
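To watch incoming messages in the same tab, you can also attach a handler (plain browser JavaScript):

ws.onmessage = (event) => console.log('received:', event.data);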

Check out the users table in the Docker database and export the rows as JSON so you can use the array of users in the browser.

Now, type the following in any browser window:

 users.forEach(user => ws.send(JSON.stringify({event: 'send_chat_message_to_participant', data: {message: 'test', participantId: user.id}})));

This sends an individual chat message to every participant. You can also check the Docker logs to see it in action.

Regardless of which NestJS instance holds a given WebSocket connection, messages are consistently delivered to their participants. This is how you scale a NestJS chat app to millions of users.

If you are interested in learning how to deploy this stack on Kubernetes, leave a comment and I will write that tutorial.

If you want to collaborate on potential start-up projects, you can reach me at: rares.tarabega27@gmail.com

Update:
If you want to check out an improved, more efficient model for broadcasting messages between instances, as well as how to deploy this stack on Kubernetes, you can find Part 2 of this series here.

Top comments (7)

Sebastian Wessel

Where is the point of scaling here?
If you send every message to every instance, and each instance needs to check whether the message is relevant, then you will have 100% workload on every instance.
Your “Millions of Users” will produce a lot of messages, which are all passed to every instance.
You need some kind of broker up front that every message is sent to, where every instance subscribes only to the messages relevant to it.

zenstok

Hi there,

Thank you for pointing that out. I don't believe there will be a 100% workload on every instance. To illustrate this better, let's consider a scenario with 2 million users and 100 instances. Assuming an even distribution of users, we would have approximately 20,000 connections per instance. In the case of a peer-to-peer chat app, whenever a message is sent, each instance will execute the following code block:

const clientPool = this.connectedClients.get(clientId);

if (clientPool) {
  clientPool.forEach((client) => {
    // In a real-world scenario, this array would rarely have more than 1 element,
    // unless it's an app where users connect from multiple devices.
    client.send(message);
  });
}

This block of code doesn't really look like 100% workload even if users are spamming the chat.

Also, although not covered in the tutorial, further improvements can be made. For example, we can store a map of instance references in Redis, associating them with connected clients using user IDs. Then, when we want to propagate a new event, we can check this map and send the event only to the instances that are associated with the recipient user ID. This Redis map would be continuously updated with each WebSocket connect/disconnect.
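
A rough sketch of that routing-map idea (illustrative names only, not from the repo): the connection lifecycle maintains a Redis set per user, and the publisher targets instance-specific channels instead of broadcasting.

import Redis from 'ioredis';

// Called from the WebSocket connect/disconnect handlers.
async function registerConnection(redis: Redis, userId: string, instanceId: string) {
  await redis.sadd(`ws:user:${userId}:instances`, instanceId);
}

async function unregisterConnection(redis: Redis, userId: string, instanceId: string) {
  await redis.srem(`ws:user:${userId}:instances`, instanceId);
}

// Each instance subscribes to its own channel (e.g. `ws:instance:<id>`),
// so only the instances actually holding a socket for the user get the event.
async function publishToUser(redis: Redis, userId: string, message: string) {
  const instanceIds = await redis.smembers(`ws:user:${userId}:instances`);
  await Promise.all(
    instanceIds.map((id) =>
      redis.publish(`ws:instance:${id}`, JSON.stringify({ userId, message })),
    ),
  );
}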

Sebastian Wessel

The second part is the important one and the direction to go: only send messages to the instances that need to handle them.

By 100% workload I mean the traffic workload in the first place.
If you send every message to every instance, every instance receives the whole traffic and throws 75% of it in the trash because it's not related.
A nicer approach could be to remove Redis here and, as an example, use an MQTT broker.
There, every instance publishes an incoming message to a topic like msg/user_id, and instances can subscribe/unsubscribe for specific users as soon as they have a WebSocket connection for that user.
You can also split the whole thing: some microservices only handle responses, while other microservices write the messages to a database.

zenstok

I completely agree with your perspective. I'd like to highlight a few key points:

I believe in avoiding premature optimization that goes beyond the specified business logic. My approach is to initially create a monolithic application and scale it horizontally when necessary. When you anticipate reaching a bottleneck, that is the appropriate time to consider additional enhancements, such as specialized microservices for specific tasks.

I think this approach allows you to have a better view of the parts of your app that really require scaling, something that cannot always be foreseen at the beginning.

Sebastian Wessel

Yes, I totally agree with the monolithic approach at the beginning and with keeping it simple as long as possible.
But this specific case is a more general architecture decision at the beginning, which should be made really carefully.

Take time here and maybe run some load tests and try out different ways.
It's not about "you should implement it this way to solve issues you currently don't have".

It's more about "if we run into issues, we simply can...".
Keep your options here.

I commented on this article not because I'm very smart, but because I did similar things in the past and ran into issues, where I ended up refactoring a bunch of stuff at the architectural level that I wouldn't have had to touch if I had thought and tested more up front.

I'm currently working on a framework, purista.dev, which might not be very helpful for your use case at the moment, but it follows your approach of starting with a monolithic application and scaling later.

Hao-Ming Hsu

@zenstok @sebastian_wessel My implementation (github.com/minghsu0107/go-random-chat) directly addresses the scalability issues discussed here. It uses a microservices approach with a dedicated forwarding service that routes messages only to the subset of instances where the recipients are connected, allowing the system to scale horizontally while minimizing overhead.

AgBorkowski

Sure, Kubernetes will be fine ;)