
Danny Reed


Debounce in Event-Driven Serverless

Debounce

Debounce is a technique that consolidates several near-simultaneous events into one. It's related to fan-in, but debounce usually has a time component. One way to put it: "If we get X related inputs within N seconds, only trigger one output."
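The idea is easiest to see in plain code first. Here's a classic timer-based debounce sketch in Node.js — a local illustration of the concept, not the serverless implementation this article builds:

```javascript
// Classic timer-based debounce: calls arriving within `waitMs` of each
// other collapse into a single trailing invocation of `fn`.
function debounce(fn, waitMs) {
    let timer = null;
    return (...args) => {
        clearTimeout(timer);                    // cancel any pending call
        timer = setTimeout(() => fn(...args), waitMs);
    };
}

// Three near-simultaneous "sensor events" trigger one "computation"
let computations = 0;
const onSensorEvent = debounce(() => { computations += 1; }, 50);
onSensorEvent();
onSensorEvent();
onSensorEvent();
// After ~50ms of quiet, `computations` is 1, not 3
```

The hard part is that in a serverless architecture there's no long-lived process to hold that timer, which is what the rest of this post works around.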

Use Case

In my case, I have several sensors that submit events when certain conditions occur. Each event triggers some computations, but if several sensors submit events within the same second, I only want to trigger one computation. Running N separate computations that produce N separate results and N separate rows in a data store means some rows are obsolete almost as soon as they're created, and it means storing lots of extraneous data.

What I need is to debounce those events so that if N events come in "at once," we create just one new row in the data store.

Challenges

This is a hard problem, especially in serverless. One reason it's hard is that in order to implement debounce, we need to find mechanisms to do two things:

  1. Deduplicate
  2. Delay

Approach

Deduplication: DynamoDB

DynamoDB lets us call PutItem conditionally. This is important because we'll use a Condition Expression to prevent overwriting an existing record: once we put an item with a given key, subsequent puts for that key fail. This means that when we hook up a DynamoDB Stream to this table, it will fire only one event even if we send several PutItem calls to DynamoDB at nearly the same time.

This snippet shows writing the record conditionally:

const { DynamoDBClient, PutItemCommand } = require('@aws-sdk/client-dynamodb');
const { marshall } = require('@aws-sdk/util-dynamodb');

const db = new DynamoDBClient();

await db.send(new PutItemCommand({
    TableName: process.env.tableName,
    // Only succeed if no item with this partition key exists yet
    ConditionExpression: 'attribute_not_exists(pk)',
    Item: marshall({
        pk: event.deviceId,
        timestamp: event.disappearTime,
    }),
    ReturnConsumedCapacity: 'TOTAL',
}));

If the call to PutItem would overwrite an existing record, it throws a ConditionalCheckFailedException, which you should catch and treat as expected behavior rather than an error.
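One way to structure that catch (the helper name `putIfAbsent` is my own, not an SDK API):

```javascript
// Treat the conditional-check failure as "duplicate seen";
// rethrow anything else so genuine failures still surface.
async function putIfAbsent(db, command) {
    try {
        await db.send(command);
        return true;   // first event for this key — the stream will fire
    } catch (err) {
        if (err.name === 'ConditionalCheckFailedException') {
            return false;  // a record already exists: debounced
        }
        throw err;
    }
}
```

Because the client is passed in, this wrapper works with whatever DynamoDB client instance your handler already creates.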

It's worth noting that deduplication is a helpful mechanism by itself. If you only want deduplication and not a full debounce solution, you can use this part on its own and hook up your Lambda target.

Delay: SQS

This part of the solution isn't without drawbacks. SQS is one of the few serverless offerings that lets us delay a message (something we usually avoid in event-driven architectures). Most event-driven services, by nature, don't hold onto a message any longer than it takes to process it and pass it on. Here, we actually want to put the brakes on the message for a bit so that any "at the same time" messages can finish coming in. SQS facilitates that.

To configure the delay, set DelaySeconds to whatever value suits your use case; I chose 1 second for mine. Here's my CloudFormation snippet:

AppDelayQueue:
  Type: AWS::SQS::Queue
  Properties:
    DelaySeconds: 1
    QueueName: AppDelayQueue
    KmsMasterKeyId: alias/aws/sqs # Enable encryption

Connecting it Up: DynamoDB Stream

We'll pipe our PutItem events to a Lambda using a DynamoDB Stream, and then the Lambda will use the SDK to enqueue the message in a delay queue. The code is very simple, and frankly I wish DynamoDB Streams would let you send messages directly to SQS, but alas, the options are currently Lambda and Kinesis Data Streams.

Edit: Nowadays, there's also EventBridge Pipes which can hook up to a DynamoDB Stream and then pipe your data into a wide variety of targets, including SQS Queues. That would be the more "managed" way to do things at this point.
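A sketch of what that could look like in CloudFormation, alongside the AppDelayQueue defined above (AppTable and PipeRole are hypothetical resource names, and the role would need stream-read and sqs:SendMessage permissions):

```yaml
AppDebouncePipe:
  Type: AWS::Pipes::Pipe
  Properties:
    RoleArn: !GetAtt PipeRole.Arn          # hypothetical pipe execution role
    Source: !GetAtt AppTable.StreamArn     # hypothetical table's stream
    SourceParameters:
      DynamoDBStreamParameters:
        StartingPosition: LATEST
    Target: !GetAtt AppDelayQueue.Arn
```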

Here's the simple code:

const {
    SQSClient,
    SendMessageCommand,
} = require('@aws-sdk/client-sqs');

const sqs = new SQSClient();
const QUEUE_URL = process.env.queueUrl;

exports.handler = async (event) => {
    // A stream batch can contain more than one record — handle them all
    for (const record of event.Records) {
        console.log(JSON.stringify(record));

        // Only new items matter here; skip MODIFY/REMOVE events
        if (record.eventName !== 'INSERT') continue;

        // Parse the relevant data out of the stream image
        const deviceId = record.dynamodb.NewImage.pk.S;
        const timestamp = parseInt(record.dynamodb.NewImage.timestamp.N, 10);

        // Send the message to the SQS delay queue
        await sqs.send(new SendMessageCommand({
            MessageBody: JSON.stringify({ deviceId, timestamp }),
            QueueUrl: QUEUE_URL,
        }));
    }
};


Diagram

All together now:

(Diagram: sensor events → DynamoDB conditional PutItem → DynamoDB Stream → Lambda → SQS delay queue → downstream processing.)

Tips

  1. The nature of debounce involves setting a threshold for what should "count as one," so you'll always have messages falling just inside or just outside whatever threshold you set. Design your downstream app so that receiving a repeat event, say, 1.1 seconds after the first doesn't break anything or corrupt data.

  2. You may want to disable retries on the DynamoDB Stream event source mapping, depending on your use case. Leaving retries on causes failed messages to get recycled over and over, which in this case wasn't helpful. Oddly, this can't be done through the AWS Console; you'll need the CLI (update-event-source-mapping with --maximum-retry-attempts 0) or IaC.

  3. If you want to measure how many records are getting debounced, add some code inside the catch block where you PUT to DynamoDB. After confirming that the exception name is ConditionalCheckFailedException, you can log a message or, for bonus points, publish a CloudWatch metric :) Just be careful with the latter option if you're dealing with tons of messages, since CloudWatch metrics can get expensive!
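For the metric option, the PutMetricData input might look like this (the namespace and metric name here are my own choices, not from any standard):

```javascript
// Input shape for CloudWatch PutMetricData, published from the catch
// block each time a duplicate is detected.
const metricParams = {
    Namespace: 'App/Debounce',          // hypothetical namespace
    MetricData: [{
        MetricName: 'DebouncedEvents',  // hypothetical metric name
        Unit: 'Count',
        Value: 1,   // one increment per ConditionalCheckFailedException
    }],
};
```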

FIFO Delay Queues with Deduplication?

Why not just use an SQS FIFO queue with a delay? With a deduplication ID (or content-based deduplication) you get deduplication out of the box, and you can build the delay right in. Honestly, that's a really good option for many use cases.

The one really frustrating problem is that the deduplication feature doesn't deduplicate only among messages currently in the queue; it deduplicates against all messages sent in the past 5 minutes, and that interval currently can't be made shorter. In my use case, I actually need to send messages with the same deduplication ID within 5 minutes of each other. I just want deduplication against what's actually in the queue at the time.
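For reference, here's roughly what the FIFO-based send parameters would look like (the queue URL, group ID, and example values are placeholders; FIFO queue names must end in ".fifo"):

```javascript
const deviceId = 'sensor-42';       // example values, not from the article
const timestamp = 1700000000000;

const params = {
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/AppDelayQueue.fifo',
    MessageGroupId: deviceId,            // ordering scope per device
    MessageDeduplicationId: deviceId,    // duplicates dropped for 5 minutes
    MessageBody: JSON.stringify({ deviceId, timestamp }),
    // Note: FIFO queues don't support per-message DelaySeconds;
    // the delay must be configured on the queue itself.
};
```

This is exactly where the 5-minute window bites: a second event from `sensor-42` four minutes later would be silently dropped, even though the queue is empty.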

Conclusion

Event-driven architectures are very powerful, but debounce is hard to achieve. Thanks to the combination of DynamoDB and SQS, we can achieve this behavior and minimize extraneous data processing and storage.

Thoughts and criticisms welcome! Let me know how you're solving these challenges in your shop!
