
Krishna Desiraju

Designing a Highly Scalable Serverless Event Scheduler

Some business use cases require lightweight event scheduling rather than a full-blown workflow engine. For instance, an OTT service may want to notify customers 15 minutes before their favorite shows start streaming. Each customer has different favorites and different reminder timings. In cases like these, an Event Scheduler that acts as a timer is helpful: it can be plugged into a handler that processes each event appropriately.

While we can certainly use queues like RabbitMQ, AWS SQS, or GCP Pub/Sub for delayed messaging, each has its own maximum delay. For instance, the maximum message delay in AWS SQS is 15 minutes, and GCP currently has a Tasks API that can schedule tasks up to a year out. If we want to schedule something a month, a quarter, or a year later, we'll have to build something ourselves.

So here is one architecture for a highly scalable, fully serverless event scheduler using AWS API Gateway, Lambda, DynamoDB, an EventBridge cron trigger, and SQS. The primary idea is to build an event scheduler that is entirely serverless and can work at any scale, on the order of a million events per hour if need be. This is inspired by Yan Cui's post here, extending it to improve precision without creating hot spots.

[Architecture diagram: Highly Scalable Serverless Event Scheduler]

Creating Scheduled Events

To make event creation highly scalable, we use DynamoDB fronted by API Gateway with a Lambda function. Although API Gateway alone is sufficient to write to DynamoDB, the Lambda is mainly used to pre-batch events into 1-minute windows and stamp the batch_epoch_time on them.
For instance, if an event is scheduled for 01:11:50 (hh:mm:ss), we stamp its batch_epoch_time as 01:11:00 (expressed as an epoch timestamp).
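
Here's a minimal sketch of what that pre-batching Lambda could look like in Python with boto3, assuming a hypothetical scheduled_events table keyed by event_id and request fields named scheduled_epoch_time, target_queue_url, and payload:

```python
import json
import os
import boto3

# Hypothetical table name; the real name would come from configuration.
TABLE = boto3.resource("dynamodb").Table(os.environ.get("EVENTS_TABLE", "scheduled_events"))

def lambda_handler(event, context):
    body = json.loads(event["body"])                   # API Gateway proxy payload
    scheduled_at = int(body["scheduled_epoch_time"])   # when the event should fire

    # Floor the schedule to the start of its 1-minute window.
    batch_epoch_time = scheduled_at - (scheduled_at % 60)

    TABLE.put_item(Item={
        "event_id": body["event_id"],
        "scheduled_epoch_time": scheduled_at,
        "batch_epoch_time": batch_epoch_time,            # used by the sweeper's GSI
        "target_queue_url": body["target_queue_url"],
        "payload": json.dumps(body.get("payload", {})),  # stored as a JSON string
    })
    return {"statusCode": 201, "body": json.dumps({"event_id": body["event_id"]})}
```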

This makes the sweeper's job easy: it simply queries the DynamoDB table for unprocessed batches, without having to worry about locking mechanisms or isolation.

Even if we were to use SQL for any reason (which is not as scalable, though it can be serverless), we wouldn't have to resort to painful pessimistic locking as long as we pre-batch events when they are created or updated.

The Lambda can also be used to perform basic validations on the request.

Sweeping Scheduled Events

The sweeper is essentially an EventBridge cron trigger that invokes a Lambda every minute. The Lambda sweeps DynamoDB for all events whose schedules fall within the next 5 minutes, based on batch_epoch_time.
Note that we sweep ahead of time rather than picking up events that are already past their scheduled time; this allows us to process events at scale without causing delays. As you will see in the last step, when the event is sent back to the client application it carries a compensating delay, which gives us precision down to the second as opposed to the sweeper's 1-minute frequency.

To retrieve events efficiently by batch_epoch_time, we create a Global Secondary Index (GSI) on batch_epoch_time. This lets us query the table rather than scan it.
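
For example, with boto3 the sweeper could query a hypothetical GSI named batch_epoch_time-index for a given one-minute window along these lines:

```python
from boto3.dynamodb.conditions import Key

# Hypothetical GSI name; it would be defined on the table with
# batch_epoch_time as its partition key.
resp = TABLE.query(
    IndexName="batch_epoch_time-index",
    KeyConditionExpression=Key("batch_epoch_time").eq(batch_epoch_time),
)
events = resp["Items"]
```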

To make the sweeper highly scalable, the Lambda groups the swept events into batches of 100 and drops them onto an SQS queue for further processing, so that this Lambda doesn't end up processing all the events itself, especially when it sweeps many thousands at once. Also note that the maximum payload size of a DynamoDB query response is 1 MB, so the Lambda has to paginate until it has swept all the events in the one-minute windows it is processing (see the sketch below).
To improve this further, we can maintain a separate table that tracks which batch_epoch_times are being processed or are completed, and consult it to find the batches the Lambda has to sweep in the next 5 minutes.
Relaying the batches to SQS instead of processing them in place also helps prevent hot spots and timeouts in Lambda processing.
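
A sketch of that sweep-and-relay loop might look like the following, assuming the same hypothetical table and GSI names as above and an internal queue URL in BATCH_QUEUE_URL; pagination handles the 1 MB query limit, and events are relayed 100 per SQS message:

```python
import json
import os
import time
import boto3
from boto3.dynamodb.conditions import Key

TABLE = boto3.resource("dynamodb").Table(os.environ.get("EVENTS_TABLE", "scheduled_events"))
SQS = boto3.client("sqs")
BATCH_QUEUE_URL = os.environ.get("BATCH_QUEUE_URL")  # internal queue feeding the relay Lambda

def sweep(event, context):
    now = int(time.time())
    current_minute = now - (now % 60)

    for offset in range(5):  # the next five 1-minute windows
        window = current_minute + offset * 60
        items, start_key = [], None

        # Paginate: a single query response is capped at 1 MB.
        while True:
            kwargs = {
                "IndexName": "batch_epoch_time-index",
                "KeyConditionExpression": Key("batch_epoch_time").eq(window),
            }
            if start_key:
                kwargs["ExclusiveStartKey"] = start_key
            resp = TABLE.query(**kwargs)
            items.extend(resp["Items"])
            start_key = resp.get("LastEvaluatedKey")
            if not start_key:
                break

        # Relay in batches of 100 events per SQS message.
        for i in range(0, len(items), 100):
            SQS.send_message(
                QueueUrl=BATCH_QUEUE_URL,
                MessageBody=json.dumps(items[i:i + 100], default=str),
            )
```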

The SQS queue is polled by another Lambda that processes these batches of 100 events and forwards each event to the client application's SQS queue specified in the request, adding a delay calculated from the current time and the actual scheduled time, since we swept the event ahead of time. Because SQS supports a message delay of up to 15 minutes and we are pulling only about 5 minutes ahead of time, the compensating delay always fits within that limit.
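
Here is a minimal sketch of that relay Lambda, assuming the event items carry the hypothetical scheduled_epoch_time, target_queue_url, and payload fields stamped at creation:

```python
import json
import time
import boto3

SQS = boto3.client("sqs")

def relay(event, context):
    # Each SQS record carries a batch of up to 100 swept events.
    for record in event["Records"]:
        for item in json.loads(record["body"]):
            now = int(time.time())
            # Compensating delay: the event was swept up to ~5 minutes early,
            # so DelaySeconds (capped at 900) comfortably covers the remainder.
            delay = max(0, min(int(item["scheduled_epoch_time"]) - now, 900))
            SQS.send_message(
                QueueUrl=item["target_queue_url"],
                MessageBody=item["payload"],  # already a JSON string
                DelaySeconds=delay,
            )
```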

If an event is scheduled within the next 5 minutes, the API Gateway Lambda can send it directly to the client's SQS queue, bypassing the sweeper; this preserves precision even for something scheduled only a few seconds out.
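
That short-circuit could be a small check in the create-event Lambda, again using the hypothetical field names from above:

```python
import json
import time
import boto3

SQS = boto3.client("sqs")

def maybe_short_circuit(body):
    """Send directly to the client's queue if the event fires within 5 minutes."""
    scheduled_at = int(body["scheduled_epoch_time"])
    now = int(time.time())
    if scheduled_at - now > 300:
        return False  # let the normal pre-batch + DynamoDB path handle it
    SQS.send_message(
        QueueUrl=body["target_queue_url"],
        MessageBody=json.dumps(body.get("payload", {})),
        DelaySeconds=max(0, scheduled_at - now),
    )
    return True
```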

In addition to the serverless technologies, pre-batching the events, having a single sweeper instead of multiple workers, and relaying events with a compensating delay for precision are key to keeping the architecture simple.

This is how the above architecture helps us build a fully serverless and highly scalable event scheduler.

Please let me know your thoughts about this in the comments below. Thanks.

Top comments (1)

enzomar

Hi, interesting article. I do have a question by the way, maybe a naive one: how do you handle the case where you want/need to scale up the sweeper Lambda? Don't you risk sending the same event multiple times?