Hien Nguyen Minh

Designing a Highly Scalable Chat Application for Handling High User Loads in NestJS

In this blog post, I am excited to share my insights and experience in designing a scalable chat application capable of handling high user loads using NestJS, a powerful Node.js framework.

A working repository is available here.

Let's get started! 🐈

What Enables Scalability in My Design?

In order to achieve scalability in our chat application built with NestJS, we consider several key factors:

1. Maintaining Correct Functionality

Our application ensures that even with horizontal scaling across multiple instances, the desired functionality remains intact. This means that as we expand our infrastructure to handle increasing user loads, the application continues to perform as expected.

2. Efficient Message Delivery

To optimize resource usage and enhance performance, our application efficiently delivers messages or events only to the relevant instances. This approach minimizes unnecessary invocations, reducing the overhead and ensuring effective communication between the different components of the application.

3. Loose Coupling and Microservices

The architecture of our application is designed with loose coupling between services, following the Dependency Inversion Principle in NestJS. This loose coupling allows for easy decomposition of the application into microservices with minimal effort. Each microservice can operate independently, facilitating scalability and adaptability as the system grows.

4. Clean Architecture Implementation

By implementing the principles of Clean Architecture, our application becomes framework-independent. This means we can swap out frameworks and databases, such as migrating from MongoDB with Mongoose to PostgreSQL with TypeORM, without modifying the core entities and business logic. The Clean Architecture approach decouples the application's internal layers, making it more maintainable and flexible.

Implementing a Scalable Application

Note: In my application, I implemented two versions of the PubSub service: one utilizing GraphQL Subscriptions and the other using Socket.IO. The PubSub service will be further described in detail later in this blog. By setting the PUBSUB_SERVICE_PROVIDER in the .env file, you can seamlessly switch between these options.

1. Maintaining Correct Functionality

By default, both GraphQL Subscriptions and Socket.IO use an in-memory event-publishing system. This means that events published by one instance are not received by clients connected to other instances. When scaling to multiple server instances, the default in-memory system must be replaced with an implementation that routes events to all clients, wherever they are connected. There are various options available, such as Redis PubSub, Google Pub/Sub, MongoDB PubSub, PostgreSQL PubSub, and more. In this example, we will use Redis PubSub.

The diagram below provides a visual representation of how it works:

Redis PubSub

Implementation

  • GraphQL Subscription


@Module({
  providers: [
    {
      provide: PUB_SUB_TOKEN,
      useFactory: (envConfigService: EnvConfigService) => {
        const options: RedisOptions = {
          retryStrategy: (times) => {
            // reconnect after
            return Math.min(times * 50, 2000);
          },
        };
        const redisConnectionString =
          envConfigService.getRedisConnectionString();

        return new RedisPubSub({
          publisher: new Redis(redisConnectionString, options),
          subscriber: new Redis(redisConnectionString, options),
        });
      },
      inject: [EnvConfigService],
    },
  ],
  exports: [PUB_SUB_TOKEN],
})
export class RedisPubSubModule {}


  • Socket.IO


export class RedisIoAdapter extends IoAdapter {
  private adapterConstructor: ReturnType<typeof createAdapter>;

  constructor(
    app: INestApplicationContext,
    private readonly envConfigService: EnvConfigService,
  ) {
    super(app);
  }

  async connectToRedis(): Promise<void> {
    const options: RedisOptions = {
      retryStrategy: (times) => {
        // reconnect after
        return Math.min(times * 50, 2000);
      },
    };
    const redisConnectionString =
      this.envConfigService.getRedisConnectionString();
    const pubClient = new Redis(redisConnectionString, options);
    const subClient = new Redis(redisConnectionString, options);

    this.adapterConstructor = createAdapter(pubClient, subClient);
  }

  createIOServer(port: number, options?: ServerOptions): any {
    const server = super.createIOServer(port, options);
    server.adapter(this.adapterConstructor);
    return server;
  }
}



2. Efficient Message Delivery

Consider an approach that groups events by common topics, such as "notification," "channelMessage," "directMessage," and so on. Users subscribe to these topics, and when an event takes place, it is published to the corresponding topic. However, with this approach every instance receives the entire application's event traffic, even though only a portion of it is relevant to the users it serves. With five instances, for example, each instance still receives 100% of the events, yet on average only about 20% of them (100/5) concern users connected to that instance. The remaining 80% of deliveries go to instances that don't need them.

To optimize resource usage and enhance performance, my approach takes a different route. Instead of using common topics, I create an individual topic for each user, following the format of "event:{userId}." When an event occurs, it is simply published to the corresponding user's topic, such as "event:123" for user 123. This ensures that only the instances responsible for handling that specific user will be invoked. By implementing this optimization, resources are utilized efficiently, reducing unnecessary invocations and improving overall performance.

Implementation

  • GraphQL Subscription


  subscribeToEventTopic(subscriber: ISubscriber) {
    const { id: subscriberId, userId } = subscriber;
    ...
    const eventTopic = this.getEventTopic(userId);
    return this.pubSub.asyncIterator(eventTopic);
  }
  ...
  private getEventTopic(userId: string) {
    return `event:${userId}`;
  }


  • Socket.IO


  async handleConnection(client: Socket) {
    const authToken = client.handshake.auth.token;
    if (!_.isString(authToken)) {
      client.disconnect();
      return;
    }

    const isValidToken = await this.isValidToken(authToken);
    if (!isValidToken) {
      client.disconnect();
      return;
    }

    const user = this.parseToken(authToken);

    const userRoomName = this.getUserRoomName(user.id);
    client.join(userRoomName); // Join the per-user room so events for this user reach this client

    const subscriber: ISubscriber = {
      id: crypto.randomUUID(),
      userId: user.id,
      username: user.username,
      tokenExpireAt: user.exp,
    };
    client.data.subscriber = subscriber;
  }
  ...
  private getUserRoomName(userId: string) {
    return `event:${userId}`;
  }


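The snippets above cover the subscribe side: each client ends up listening on its own "event:{userId}" topic or room. For completeness, here is a minimal sketch of the publish side under the same scheme. The function names and payload type are assumptions for illustration; in the actual repository this logic lives behind the IPubSubService implementations described in the next section.


import { RedisPubSub } from 'graphql-redis-subscriptions';
import { Server } from 'socket.io';

// Hypothetical helpers illustrating the publish side of the per-user topic scheme.

// GraphQL Subscriptions variant: publish to the per-user Redis topic. Only
// instances holding an active subscription on `event:{userId}` are listening
// on that Redis channel, so only they receive the message.
async function publishEventToUser(
  pubSub: RedisPubSub,
  userId: string,
  payload: Record<string, unknown>,
): Promise<void> {
  await pubSub.publish(`event:${userId}`, payload);
}

// Socket.IO variant: emit to the per-user room. The Redis adapter propagates
// the broadcast so that every socket joined to that room receives it,
// whichever instance it is connected to.
function emitEventToUser(
  server: Server,
  userId: string,
  payload: Record<string, unknown>,
): void {
  server.to(`event:${userId}`).emit('event', payload);
}
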

3. Loose Coupling and Microservices

In my application, I have two main services. The first service handles CRUD operations for messages and other related tasks. The second service is responsible for real-time event publishing to users, known as the PubSub service. To ensure a decoupled architecture, I employ the Dependency Inversion Principle by abstracting my services. For the PubSub service, I have created the IPubSubService interface, which has two implementations. One implementation utilizes GraphQL Subscriptions, while the other uses Socket.IO. Switching between these implementations is as simple as setting the value of PUBSUB_SERVICE_PROVIDER to either "GRAPHQL_SUBSCRIPTION" or "SOCKET.IO". This seamless switch exemplifies the decoupling of the services.

Suppose we decide to break down our application into microservices. In that scenario, creating a new microservice that implements the IPubSubService interface becomes a straightforward task. By importing the corresponding module into the main module, we can seamlessly integrate the new microservice with minimal effort and without requiring any modifications to the underlying logic. This flexible design empowers easy expansion and adaptability as our application evolves into a microservices architecture.
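
As a hypothetical sketch of what such an integration could look like, the module below satisfies IPubSubService by forwarding events to a separate PubSub microservice over a NestJS transport. The client name, event patterns, and connection options are assumptions for illustration, not part of the repository, and the imports of IPubSubService, PUBSUB_SERVICE_TOKEN, and the params interfaces from the core layer are omitted.


import { Inject, Injectable, Module } from '@nestjs/common';
import { ClientProxy, ClientsModule, Transport } from '@nestjs/microservices';

// Hypothetical provider: implements IPubSubService by forwarding events to a
// dedicated PubSub microservice (all names below are illustrative assumptions).
@Injectable()
export class MicroservicePubSubService implements IPubSubService {
  constructor(
    @Inject('PUBSUB_CLIENT') private readonly client: ClientProxy,
  ) {}

  async publishChannelMessageEvent(
    params: IPublishChannelMessageParams,
  ): Promise<void> {
    // emit() dispatches the event to the PubSub microservice (fire-and-forget).
    this.client.emit('channel-message-event', params);
  }

  async publishDirectMessageEvent(
    params: IPublishDirectMessageParams,
  ): Promise<void> {
    this.client.emit('direct-message-event', params);
  }
}

@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'PUBSUB_CLIENT',
        transport: Transport.REDIS,
        options: { host: 'localhost', port: 6379 },
      },
    ]),
  ],
  providers: [
    MicroservicePubSubService,
    {
      provide: PUBSUB_SERVICE_TOKEN,
      useExisting: MicroservicePubSubService,
    },
  ],
  exports: [PUBSUB_SERVICE_TOKEN],
})
export class MicroservicePubSubModule {}
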

Let's see everything in action with the following diagram:

PubSub Service

Implementation

1. Create the IPubSubService interface and PUBSUB_SERVICE_TOKEN.



export const PUBSUB_SERVICE_TOKEN = Symbol('PUBSUB_SERVICE_TOKEN');
export interface IPubSubService {
  publishChannelMessageEvent(
    params: IPublishChannelMessageParams,
  ): Promise<void>;

  publishDirectMessageEvent(params: IPublishDirectMessageParams): Promise<void>;
}



2. In the main service, we inject PUBSUB_SERVICE_TOKEN.



@Injectable()
export class MessageUseCase {
  constructor(
    @Inject(DATA_SERVICE_TOKEN)
    private readonly dataService: IDataService,
    @Inject(PUBSUB_SERVICE_TOKEN)
    private readonly pubSubService: IPubSubService,
  ) {}

  async createOne(params: {
    senderId: string;
    createMessageInput: CreateChannelMessageInput;
  }): Promise<Message> {
    ...
  }
}



3. Import the PubSubServiceModule into the main service.



@Module({
  imports: [DataServiceModule, PubSubServiceModule],
  providers: [MessageUseCase],
  exports: [MessageUseCase],
})
export class MessageModule {}



4. In the PubSubServiceModule, check the value of PUBSUB_SERVICE_PROVIDER to import the corresponding provider.



const pubsubModuleProvider =
  process.env.PUBSUB_SERVICE_PROVIDER === PUBSUB_SERVICE_PROVIDER['SOCKET.IO']
    ? SocketIOModule
    : GraphQLSubscriptionModule;

@Module({
  imports: [pubsubModuleProvider],
  exports: [pubsubModuleProvider],
})
export class PubSubServiceModule {}


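For reference, the PUBSUB_SERVICE_PROVIDER constant read above could be defined roughly as follows (an assumed shape for illustration; see the repository for the actual definition):


// Assumed shape of the provider constant compared against process.env (illustrative only).
export const PUBSUB_SERVICE_PROVIDER = {
  GRAPHQL_SUBSCRIPTION: 'GRAPHQL_SUBSCRIPTION',
  'SOCKET.IO': 'SOCKET.IO',
} as const;
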

5. Implement the IPubSubService interface and export PUBSUB_SERVICE_TOKEN from each provider module.

  • GraphQL Subscription

@Module({
  imports: [DataServiceModule, RedisPubSubModule],
  providers: [
    SubscriptionResolver,
    GraphQLSubscriptionService,
    {
      useExisting: GraphQLSubscriptionService,
      provide: PUBSUB_SERVICE_TOKEN,
    },
  ],
  exports: [PUBSUB_SERVICE_TOKEN],
})
export class GraphQLSubscriptionModule {}


  • Socket.IO


@Module({
  imports: [DataServiceModule],
  providers: [
    SocketIOGateway,
    SocketIOService,
    {
      useExisting: SocketIOService,
      provide: PUBSUB_SERVICE_TOKEN,
    },
  ],
  exports: [PUBSUB_SERVICE_TOKEN],
})
export class SocketIOModule {}



4. Clean Architecture Implementation

In his book "Clean Architecture," Robert C. Martin (Uncle Bob) emphasizes the importance of placing the business logic at the core of our architecture. This keeps the business logic independent of external changes, ensuring its stability and flexibility.

The following diagram illustrates this concept:

Clean Architecture

Implementation

In my application, I organize the core entities in the core folder and the use cases (business logic) in the use-cases folder. All other layers, such as controllers, resolvers and frameworks, depend on these two core layers, while the core layers remain independent of any other layer.

Let's take a look at the Message entity:



export interface IMessage extends IBaseEntity {
  content?: string;

  senderId?: string;

  sender?: IUser;

  channelId?: string;

  channel?: IChannel;
}



Now, let's examine how a framework implements this entity. In this example, I am using a Mongoose schema:



@Schema({ ...baseSchemaOptions, collection: MESSAGE_COLLECTION })
@ObjectType()
class Message extends BaseSchema implements IMessage {
  @Prop()
  @Field()
  content?: string;

  @Prop({ type: mongoose.Schema.Types.ObjectId })
  @Field(() => String)
  senderId?: string;

  @Prop({})
  sender?: User;

  @Prop({ type: mongoose.Schema.Types.ObjectId })
  @Field(() => String)
  channelId?: string;

  @Prop({})
  channel?: Channel;
}



As you can observe, the core entity is simply an interface that remains independent and doesn't have any dependencies. It is the responsibility of the framework (such as Mongoose) to implement this core entity. So, let's say we decide to switch from MongoDB with Mongoose to PostgreSQL with TypeORM. All we need to do is have the new framework (TypeORM) implement the core entity. This highlights the framework independence of our entity, allowing for seamless transitions between different technologies without compromising the functionality or structure of our application.
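
As a sketch of what that switch might look like, here is a hypothetical TypeORM entity implementing the same IMessage interface. The column types, the relations, and the assumption that IBaseEntity contributes id, createdAt, and updatedAt are all illustrative, not taken from the repository:


import {
  Column,
  CreateDateColumn,
  Entity,
  ManyToOne,
  PrimaryGeneratedColumn,
  UpdateDateColumn,
} from 'typeorm';

// Hypothetical TypeORM implementation of the framework-independent IMessage entity.
@Entity({ name: 'messages' })
export class Message implements IMessage {
  // Assumed IBaseEntity fields: id plus timestamps.
  @PrimaryGeneratedColumn('uuid')
  id: string;

  @CreateDateColumn()
  createdAt: Date;

  @UpdateDateColumn()
  updatedAt: Date;

  @Column({ nullable: true })
  content?: string;

  @Column({ type: 'uuid', nullable: true })
  senderId?: string;

  // Relations reference the corresponding TypeORM entities (assumed to exist).
  @ManyToOne(() => User)
  sender?: User;

  @Column({ type: 'uuid', nullable: true })
  channelId?: string;

  @ManyToOne(() => Channel)
  channel?: Channel;
}
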

Next, let's take a look at the Message use case.



@Injectable()
export class MessageUseCase {
  constructor(
    @Inject(DATA_SERVICE_TOKEN)
    private readonly dataService: IDataService,
    @Inject(PUBSUB_SERVICE_TOKEN)
    private readonly pubSubService: IPubSubService,
  ) {}

  async createOne(params: {
    senderId: string;
    createMessageInput: CreateChannelMessageInput;
  }): Promise<Message> {
  ...
  }
}



In most cases, our business logic needs to interact with the database for reading and writing data. However, directly injecting the database model into the use-cases violates the principles of Clean Architecture, as it introduces a dependency on the specific framework. This means that when we switch to a different database model from a different framework, we would need to modify the use-cases as well. To overcome this, we can utilize the Dependency Inversion Principle (DIP) once again. By injecting the IDataService into our use-cases, the use-cases now depend on the IDataService interface. The responsibility of implementing this interface lies with the data access framework.

Take a look at the IDataService interface:



export const DATA_SERVICE_TOKEN = Symbol('DATA_SERVICE_TOKEN');
export interface IDataService {
  users: IRepositories.IUserRepository;

  messages: IRepositories.IMessageRepository;

  channels: IRepositories.IChannelRepository;

  workspaces: IRepositories.IWorkspaceRepository;

  members: IRepositories.IMemberRepository;

  channelMembers: IRepositories.IChannelMemberRepository;
}



Here is an example of the implementation of IDataService:



@Injectable()
export class MongooseDataService implements IDataService {
  constructor(
    @Inject(IRepositories.USER_REPOSITORY_TOKEN)
    public readonly users: IRepositories.IUserRepository,

    @Inject(IRepositories.MESSAGE_REPOSITORY_TOKEN)
    public readonly messages: IRepositories.IMessageRepository,

    @Inject(IRepositories.MEMBER_REPOSITORY_TOKEN)
    public readonly members: IRepositories.IMemberRepository,

    @Inject(IRepositories.CHANNEL_REPOSITORY_TOKEN)
    public readonly channels: IRepositories.IChannelRepository,

    @Inject(IRepositories.WORKSPACE_REPOSITORY_TOKEN)
    public readonly workspaces: IRepositories.IWorkspaceRepository,

    @Inject(IRepositories.CHANNEL_MEMBER_REPOSITORY_TOKEN)
    public readonly channelMembers: IRepositories.IChannelMemberRepository,
  ) {}
}



The framework is then responsible for implementing the repository interface:



@Injectable()
export class MessageRepository
  extends BaseRepository<Message>
  implements IMessageRepository
{
  constructor(
    @InjectModel(Message.name)
    readonly messageModel: Model<Message>,
  ) {
    super(messageModel);
  }
  ...
}



The reason for using IDataService, instead of injecting each repository directly into the use cases, is to avoid a long list of constructor dependencies. By injecting IDataService alone, we gain access to all the repositories.
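
To illustrate how this plays out inside a use case, here is a hypothetical sketch of the createOne body from MessageUseCase. The repository method names (findById, create), the channelId field on the input, and the publish params are assumptions for illustration; the actual implementation lives in the repository.


  // Hypothetical body of MessageUseCase.createOne (method and field names are assumptions).
  async createOne(params: {
    senderId: string;
    createMessageInput: CreateChannelMessageInput;
  }): Promise<Message> {
    const { senderId, createMessageInput } = params;

    // Every repository is reachable through the single injected IDataService.
    const channel = await this.dataService.channels.findById(
      createMessageInput.channelId,
    );
    if (!channel) {
      throw new NotFoundException('Channel not found');
    }

    const message = await this.dataService.messages.create({
      ...createMessageInput,
      senderId,
    });

    // Notify subscribers through the abstract PubSub service (params shape is assumed).
    await this.pubSubService.publishChannelMessageEvent({ message, channel });

    return message;
  }
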

Now, let's see everything in action with the following diagram:

Data Service

As depicted, our use-cases have achieved framework independence. We can seamlessly transition to any database without modifying the core business logic. Adhering to the principles of Clean Architecture ensures a flexible and maintainable application structure.

Conclusion

That's it 🚀. We have just covered an overview of designing a scalable chat application using NestJS. In this blog, I assumed that you already have experience in the software industry, so I didn't delve too deeply into concepts like the Dependency Inversion Principle or Clean Architecture. Exploring all of these concepts in a single blog post would make it excessively long and overwhelming. The primary focus of this blog was to highlight the key factors that contribute to the scalability of our architecture. If you found this blog helpful and would like me to provide more in-depth explanations of the concepts mentioned, please feel free to let me know in the comment section.

Happy hacking! 🐈
