Rajdip Bhattacharya for keyshade

How keyshade employs sockets in a distributed environment

Any web application that wants to integrate real-time responsiveness into its operations will almost certainly use WebSockets in some way or another. Socket programming is, nonetheless, difficult to implement, so a host of libraries exist to make our lives a little easier.

But there are scenarios where these tools lack the flexibility you need, and you have to write your socket code by hand.

We were challenged with a similar problem at keyshade!

The problem

At the very core of our project sits real-time notification delivery to client applications. This means that if your application is configured with keyshade, it will receive live updates whenever your configuration changes. Your application therefore needs to maintain a socket connection to our servers, and externalizing our sockets to a third-party service was not feasible (even if technically possible). And so, our sockets are home-baked.
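To make this concrete, here is a minimal sketch of what that client connection could look like, assuming a socket.io-based gateway. The endpoint and event name below are placeholders for illustration, not keyshade's actual contract:

import { io } from 'socket.io-client'

// Hypothetical endpoint and event name, for illustration only
const socket = io('wss://api.example.com/change-notifier')

socket.on('configuration-updated', (update) => {
  // React to the new secret/variable value in your application
  console.log(`${update.name} changed in environment ${update.environmentId}`)
})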

But, the plot thickens further!

Adding to the problem

But before we could implement this, we ran into yet another (major) blocker. Consider this scenario:

[Diagram: websocket-flow-1 — a client connected to one of three API servers behind a load balancer]

We have an API cluster with 3 instances. This cluster sits behind a load balancer, through which all connections are multiplexed (I have deliberately left the load balancer out of the diagram to reduce complexity).

In this case, your client application (where you want to receive live configuration updates) has connected to Server A. But when you make changes, they are routed to Server C.

This raises a concern about the scalability of WebSockets in a distributed environment: as you can infer, sockets connected to Server A won't receive any updates that land on Server C.

This brings us to our next section: the solution.

The solution

After spending an entire day searching for the right solution, I found none. There was not a single way I could "share" socket connections among various servers. So we brought Redis Pub/Sub into the picture.

[Diagram: redis pubsub — every API server subscribed to the same Redis channel]

The fundamental idea of this approach was this: whenever a server started up, we would have it subscribe to the change-notifier channel in Redis.

// Gateway lifecycle hook: runs once the socket server is up
async afterInit() {
  this.logger.log('Initialized change notifier socket gateway')
  // Subscribe this instance to the shared Redis channel, binding the
  // handler so it keeps the gateway's `this` context
  await this.redisSubscriber.subscribe(
    'change-notifier',
    this.notifyConfigurationUpdate.bind(this)
  )
}
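For completeness, here is a minimal sketch of what a handler like notifyConfigurationUpdate could look like. The room layout (sockets joining a room keyed by environment ID) and the event name are assumptions for illustration, not necessarily keyshade's actual implementation:

// Hypothetical handler: parse the Redis message and fan it out to the
// sockets that joined the room for the affected environment
private notifyConfigurationUpdate(rawMessage: string) {
  const update = JSON.parse(rawMessage) as {
    environmentId: string
    name: string
    value: string
    isSecret: boolean
  }

  // Assumption: clients join a room named after their environment ID on connect
  this.server.to(update.environmentId).emit('configuration-updated', update)
}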

Next up, whenever a configuration's (secret or variable) value was changed, we would publish an event to this channel:

// Fan the change out to every API instance via Redis Pub/Sub; each
// subscriber then forwards it to its own connected sockets
await this.redis.publish(
  'change-notifier',
  JSON.stringify({
    environmentId: variable.environmentId,
    name: variable.name,
    value: dto.value,
    isSecret: false
  })
)
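One detail worth calling out: a Redis connection that enters subscriber mode cannot issue regular commands, which is why two separate clients appear in the snippets above (this.redis for publishing, this.redisSubscriber for subscribing). A minimal sketch of that setup, assuming node-redis v4+:

import { createClient } from 'redis'

// One client for regular commands and publishing...
const redis = createClient({ url: process.env.REDIS_URL })
await redis.connect()

// ...and a duplicated, dedicated connection for subscribing, since a
// subscribed connection cannot run other commands
const redisSubscriber = redis.duplicate()
await redisSubscriber.connect()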

This flow allowed us to achieve an architecture that is both scalable and highly available!
