Query your EventBridge Scheduled Events in DynamoDB


EventBridge Scheduled Events are a fantastic addition to the AWS event-driven ecosystem. However, they have one significant issue: observability. If you’re looking for a reliable way to track these events without the limitations of the EventBridge API, I’ve got a practical solution for you.

Introduction

The code for this article can be found here:

https://github.com/Crockwell-Solutions/cdk-eventbridge-scheduler

This CDK / TypeScript project gets you up and running with all the resources you need to observe your schedules in DynamoDB. Also included is a seeding function to demonstrate the end-to-end workflow.


What Are EventBridge Scheduled Events?

Amazon EventBridge offers two types of schedules:

  1. Recurring (cron-like): These have been around for a while, letting you define periodic event triggers. They were previously part of CloudWatch Events; AWS rightly moved this functionality under the EventBridge service.
  2. One-off schedules: Introduced in 2022, these fire once and can optionally delete themselves. They were the missing piece in event-driven systems (a minimal creation example follows this list).
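
Creating a one-off schedule is a single API call. Here is a minimal sketch using the AWS SDK for JavaScript v3; the schedule name, target ARN, role ARN and payload are placeholders for your own resources:

import { SchedulerClient, CreateScheduleCommand } from '@aws-sdk/client-scheduler';

const client = new SchedulerClient({});

async function createReminderSchedule() {
  // One-off schedule: the at() expression fires exactly once at the given UTC time.
  await client.send(new CreateScheduleCommand({
    Name: 'basket-reminder-12345',                 // placeholder name
    ScheduleExpression: 'at(2025-06-01T09:00:00)',
    FlexibleTimeWindow: { Mode: 'OFF' },
    ActionAfterCompletion: 'DELETE',               // self-delete after firing
    Target: {
      Arn: 'arn:aws:lambda:eu-west-1:123456789012:function:SendReminder', // placeholder
      RoleArn: 'arn:aws:iam::123456789012:role/SchedulerInvokeRole',      // placeholder
      Input: JSON.stringify({ customerId: '12345' }),
    },
  }));
}

Setting ActionAfterCompletion to DELETE keeps the scheduler tidy, but it is also exactly why observability becomes a problem: once a schedule has fired, it is gone.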

Why One-off Schedules?

One-off schedules are perfect for situations requiring time-sensitive actions:

  • User notifications: Trigger reminders at specific times based on user activity.
  • Time-bound processes: Schedule tasks like price changes or temporary discounts to occur at a predefined time.
  • Temporary API Keys: Generate API keys with a time-bound expiration, using a one-off schedule to trigger their deactivation at the exact expiration time.

For the purposes of this blog, I'm specifically referring to these one-off schedules.


The Problem with Tracking Scheduled Events

EventBridge Scheduler allows you to manage thousands of future events, but it is not easy to get an overview of them: what is planned to fire, or what has already fired. There are some options.

  • ListSchedules API Limitation: The ListSchedules API returns a maximum of 100 schedules per call, so paging through millions of events is impractical (see the paging sketch after this list). There is also no out-of-the-box way to query schedules by firing time. You could encode the firing time in the schedule name to allow partial filtering with NamePrefix, but schedules can’t be renamed, so updating a firing time would mean deleting and recreating the schedule, adding unnecessary complexity just for observability. Not ideal.
  • Custom Solutions: You could write event metadata back to the event origin. For example, you might schedule an event for 24 hours after a customer added an item to their basket, to send them a reminder. You could write information about that scheduled event back to the customer record, but what if you don’t want to keep updating customer records and would rather manage the schedule independently?
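
To illustrate that first limitation, paging through every schedule looks something like this (a sketch; there is still no way to filter by firing time, only by NamePrefix):

import { SchedulerClient, ListSchedulesCommand } from '@aws-sdk/client-scheduler';

const client = new SchedulerClient({});

async function listAllSchedules() {
  // Walk every page; with millions of schedules this is slow and costly,
  // and the results still can't be filtered by firing time.
  let nextToken: string | undefined;
  do {
    const page = await client.send(new ListSchedulesCommand({
      MaxResults: 100, // the hard upper limit per call
      NextToken: nextToken,
    }));
    for (const schedule of page.Schedules ?? []) {
      console.log(schedule.Name, schedule.State);
    }
    nextToken = page.NextToken;
  } while (nextToken);
}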

A completely different approach is to use DynamoDB TTL with Streams instead of EventBridge Scheduled Events. You configure your DynamoDB items with a Time-To-Live (TTL) attribute set to the event firing time, allowing you to:

  • Store schedule metadata in DynamoDB.
  • Trigger downstream workflows using DynamoDB Streams when TTLs expire.

However, TTL expiration isn’t precise enough for some strict timing requirements. AWS documentation states:

“Items with valid, expired TTL attributes may be deleted by the system at any time, typically within a few days of their expiration.”

In my experience it performs much better than that, but the point is that you can’t rely on the firing time.
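
For completeness, here is a minimal sketch of that TTL-based alternative. TTL deletions arrive on the DynamoDB Stream as REMOVE records attributed to the DynamoDB service principal, which is how you distinguish them from ordinary deletes (the payload handling is an assumption):

import { DynamoDBStreamEvent } from 'aws-lambda';

// Stream handler attached to the table (the stream must be configured to
// include old images so the expired item's metadata is available).
export const handler = async (event: DynamoDBStreamEvent) => {
  for (const record of event.Records) {
    // TTL deletions are REMOVE events performed by the DynamoDB service itself.
    const isTtlDelete =
      record.eventName === 'REMOVE' &&
      record.userIdentity?.principalId === 'dynamodb.amazonaws.com';
    if (isTtlDelete) {
      console.log('Schedule fired (approximately):', record.dynamodb?.OldImage);
    }
  }
};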


A Robust Solution: CloudTrail and DynamoDB

For a scalable and reliable way to track schedules, we can combine CloudTrail and DynamoDB:

  1. Monitor Scheduler Events with CloudTrail

    Enable CloudTrail logging for your AWS environment. It will capture events such as:

    • Schedule creation
    • Updates (e.g. firing time changes)
    • Deletion
  2. Capture Events in a DynamoDB Table

    Use EventBridge Rules to filter CloudTrail logs and write relevant events into a DynamoDB table.
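
A CDK sketch of the rule in step 2, assuming a trail capturing management events is already enabled (the construct IDs are illustrative; the demo repo contains the full wiring):

import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Rule } from 'aws-cdk-lib/aws-events';
import { SqsQueue } from 'aws-cdk-lib/aws-events-targets';
import { Queue } from 'aws-cdk-lib/aws-sqs';

export class ScheduleMonitorStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Buffer the CloudTrail-sourced events so the Lambda can re-order them.
    const queue = new Queue(this, 'ScheduleEventsQueue');

    // CloudTrail management events appear on the default event bus with
    // detail-type 'AWS API Call via CloudTrail'; filter down to the
    // Scheduler API calls we care about.
    new Rule(this, 'ScheduleApiCallsRule', {
      eventPattern: {
        detailType: ['AWS API Call via CloudTrail'],
        detail: {
          eventSource: ['scheduler.amazonaws.com'],
          eventName: ['CreateSchedule', 'UpdateSchedule', 'DeleteSchedule'],
        },
      },
      targets: [new SqsQueue(queue)],
    });
  }
}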


Benefits of This Approach

  • Comprehensive Tracking: Easily query schedules and their firing times, even for millions of events.
  • Historical Data: Maintain a record of schedules, even after they’ve fired and deleted themselves in EventBridge.
  • Resilience: With some further work, you could also store the payloads of the EventBridge Scheduled Events, which would allow you to build a disaster recovery system in the event that AWS were ever to lose your schedules!

The Architecture

At the core of this architecture is CloudTrail. We monitor the EventBridge Scheduler API calls:

(Diagram: EventBridge Scheduled Events architecture)

Independently of the core schedule-creation logic, our service performs the following steps:

  1. CloudTrail picks up the EventBridge Scheduler API events and publishes them on the default event bus
  2. An EventBridge rule filters for these events and pushes them to an SQS queue
  3. The queue batches the messages and sends them to the ScheduleMonitorFunction. This Lambda processes the messages and creates/updates the relevant records in DynamoDB

A queue is used in Step 2 because CloudTrail does not guarantee the order in which events are delivered. We need changes to be processed in order, so SQS re-batches the messages, which are then sorted by eventTime as they are processed in the ScheduleMonitorFunction Lambda.
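
Inside the Lambda, re-ordering the batch is straightforward. A sketch (the real ScheduleMonitorFunction in the repo also maps the events into the DynamoDB structure described below):

import { SQSEvent } from 'aws-lambda';

export const handler = async (event: SQSEvent) => {
  // Each SQS message body is the full EventBridge event; the CloudTrail
  // record lives in its 'detail' field. Delivery order is not guaranteed,
  // so sort by the time the API call actually happened.
  const apiEvents = event.Records
    .map((record) => JSON.parse(record.body).detail)
    .sort(
      (a, b) => new Date(a.eventTime).getTime() - new Date(b.eventTime).getTime(),
    );

  for (const detail of apiEvents) {
    // detail.eventName is CreateSchedule / UpdateSchedule / DeleteSchedule;
    // create, update or delete the matching DynamoDB item accordingly.
    console.log(detail.eventName, detail.eventTime);
  }
};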

To see how this works, deploy the demo repo and run the SeedSchedulesFunction. This creates three scheduled events, waits a few seconds, then modifies one of them and deletes another. This demonstrates the end-to-end functionality of the solution.

aws lambda invoke --function-name SeedSchedulesFunction outfile.txt

(Screenshot: the resulting EventBridge Scheduled Events)

Event metadata is stored in DynamoDB using the following structure:

  • Primary Key: PK = SCHEDULEGROUP#SCHEDULENAME
  • Global Secondary Index (GSI1): partition key groupName, sort key fireTime

This design lets you query events by group and firing time, enabling efficient lookups for upcoming schedules within specific time windows:
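
For example, fetching every schedule in a group due to fire within the next hour might look like this (a sketch; the table name is an assumption, and fireTime is assumed to be stored as an ISO-8601 string):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function upcomingSchedules(groupName: string) {
  const now = new Date();
  const inOneHour = new Date(now.getTime() + 60 * 60 * 1000);

  // GSI1: partition key groupName, sort key fireTime. ISO-8601 strings sort
  // lexicographically, so BETWEEN acts as a time-range filter.
  const result = await ddb.send(new QueryCommand({
    TableName: 'ScheduleMonitorTable', // assumed table name
    IndexName: 'GSI1',
    KeyConditionExpression: 'groupName = :g AND fireTime BETWEEN :from AND :to',
    ExpressionAttributeValues: {
      ':g': groupName,
      ':from': now.toISOString(),
      ':to': inOneHour.toISOString(),
    },
  }));
  return result.Items ?? [];
}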

(Screenshot: DynamoDB query results)

Items are stored with a Time to Live (TTL) so they will expire and be removed from DynamoDB 30 days after the firing time.


Limitations and Considerations

  • Fired Events: This solution won’t explicitly track fired events, but since AWS guarantees “at least once delivery,” it’s safe to assume that schedules whose firing time has passed have actually fired.
  • Storage Costs: DynamoDB storage and CloudTrail logs will incur additional costs, so optimise your queries and retention policies.

Next Steps

This pattern can be extended for further uses. Options to consider include:

  • Analytics: Push data to Redshift, S3, or Athena for advanced querying and analytics.
  • Payload Recovery: Store payloads of scheduled events for disaster recovery or debugging.

About Me

I’m Ian, an AWS Serverless Specialist and AWS Certified Cloud Architect based in the UK. I work as an independent consultant and have worked across multiple sectors, with a passion for aviation.

Let's connect on LinkedIn, or find out more about my work at Crockwell Solutions
