
Abhiyan Khanal for AWS Community Builders


Schedule on Demand: Building Self-Destructing Dynamically Scheduled Lambda Triggers in AWS

Cover image source: Introducing Amazon EventBridge Scheduler - AWS Blog

Ever needed to run a one-time AWS task at the exact right moment? Whether it’s deleting an S3 file after 5 minutes, ending a promo code exactly on time, or shutting off a trial at the last second, traditional schedulers often let you down.

In this post, you’ll see how to use AWS EventBridge Scheduler to:

  1. Create one-off schedules on demand
  2. Auto-delete schedules when they run
  3. Hit precise timing without wasted polls or delays

Let’s make scheduling simple and exact!

Use Cases

Dynamic, self-managing schedulers would be valuable in several scenarios:

  1. Custom S3 Object Lifecycle Management

    Set flexible, per-object expiration times beyond S3’s standard lifecycle policies:

    • Object A needs deletion after 5 minutes
    • Object B requires retention for 1 day
    • Object C must be archived after 3 hours but deleted after 30 days

    All managed through event-driven, temporary schedulers that clean up after themselves.
  2. Limited-Time Promotional Offers

    Schedule promotional code expirations or pricing changes that clean up after themselves.

  3. Trial Period Management

    Automatically schedule feature deactivation for users after trial periods.

Traditional Approaches & Limitations

DynamoDB TTL

  • Unpredictable timing: TTL deletions can take up to 48 hours with no guarantees on timing
  • Limited precision: Cannot schedule with second-level or sub-minute accuracy
  • Additional costs: You pay for extra storage, Streams, and Lambda invocations
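
For reference, the TTL pattern looks roughly like this. A minimal sketch, assuming a hypothetical TempObjects table with TTL already enabled on its expiresAt attribute:

import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const dynamo = new DynamoDBClient({ region: "us-east-1" });

// Ask DynamoDB to expire this item in 5 minutes. TTL expects an epoch
// timestamp in *seconds* -- and the actual deletion may lag by many hours.
await dynamo.send(
  new PutItemCommand({
    TableName: "TempObjects",
    Item: {
      objectKey: { S: "temp-files/file-12345.txt" },
      expiresAt: { N: String(Math.floor(Date.now() / 1000) + 5 * 60) },
    },
  })
);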

Single Recurring Cron Job

  • Inefficient polling and invocations: Executes on a fixed interval regardless of actual workload. Lambda runs even when nothing is due.
  • Concurrency & SPOF: Potential throttling or a “thundering herd” when many jobs fire at once; a single rule is also a single point of failure.
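
To make the polling waste concrete, the fixed-interval pattern tends to look like the sketch below. Everything here is hypothetical: findItemsDueBefore and processExpiredItem stand in for your own datastore query and worker.

// Hypothetical stand-ins for a real datastore query and worker.
const findItemsDueBefore = async (_cutoff: Date): Promise<string[]> => [];
const processExpiredItem = async (_id: string): Promise<void> => {};

// Invoked by a fixed-rate rule such as rate(1 minute) -- on every tick,
// whether or not any work is actually due.
export const handler = async () => {
  const dueItems = await findItemsDueBefore(new Date());
  if (dueItems.length === 0) {
    return; // Nothing was due, but the invocation is still billed.
  }
  // If thousands of items come due on the same tick, they all land here
  // at once -- the thundering-herd problem.
  await Promise.all(dueItems.map(processExpiredItem));
};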

Dynamic EventBridge Scheduler Approach

  • Create on demand: Use the AWS SDK to spin up a one-off schedule via CreateScheduleCommand
  • Self-destruct: Configure ActionAfterCompletion: "DELETE" so the schedule deletes itself after running
  • Built-in robustness: Add retry policies and Dead-Letter Queues for failure handling
  • Key benefits:
    • Precise, second-granularity schedule expressions: no more unpredictable TTL delays
    • Zero polling overhead or wasted executions
    • Fully managed scheduling infrastructure—no maintenance
    • Soft limit of 10 million schedules per account

Sample TypeScript Code

import {
  SchedulerClient,
  CreateScheduleCommand,
  DeleteScheduleCommand,
  type CreateScheduleCommandInput,
} from "@aws-sdk/client-scheduler";

const schedulerClient = new SchedulerClient({ region: "us-east-1" });

/**
 * Formats a Date into `YYYY-MM-DDThh:mm:ss` (UTC), the format expected
 * by EventBridge Scheduler's one-time `at(...)` expression.
 */
function formatTimestamp(date: Date): string {
  const pad = (n: number) => n.toString().padStart(2, "0");
  const year    = date.getUTCFullYear();
  const month   = pad(date.getUTCMonth() + 1);
  const day     = pad(date.getUTCDate());
  const hours   = pad(date.getUTCHours());
  const minutes = pad(date.getUTCMinutes());
  const seconds = pad(date.getUTCSeconds());
  return `${year}-${month}-${day}T${hours}:${minutes}:${seconds}`;
}

/**
 * Creates a one-off schedule to delete an S3 object at the given timestamp.
 */
async function scheduleS3Expiration(
  scheduleName: string,
  bucketName: string,
  objectKey: string,
  expirationDate: Date
) {
  const formattedTime     = formatTimestamp(expirationDate);
  const scheduleExpression = `at(${formattedTime})`;

  const params: CreateScheduleCommandInput = {
    Name: scheduleName,
    GroupName: "s3-expirations",
    ScheduleExpression: scheduleExpression,
    FlexibleTimeWindow: { Mode: "OFF" },
    Target: {
      Arn: "arn:aws:lambda:us-east-1:<AWS_ACCOUNT_ID>:function:CleanupS3Object",
      RoleArn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<YOUR_ROLE_FOR_SCHEDULER>",
      Input: JSON.stringify({
        bucket: bucketName,
        key: objectKey,
      }),
      RetryPolicy: {
        MaximumRetryAttempts: 25,
        MaximumEventAgeInSeconds: 86400,
      },
      DeadLetterConfig: {
        Arn: "arn:aws:sqs:us-east-1:<AWS_ACCOUNT_ID>:<YOUR_DLQ_NAME>",
      },
    },
    ActionAfterCompletion: "DELETE",    // self-destruct
  };

  try {
    await schedulerClient.send(new CreateScheduleCommand(params));
    console.log(
      `✅ Scheduled deletion of s3://${bucketName}/${objectKey} at: ${formattedTime}`
    );
  } catch (err) {
    console.error("❌ Error creating schedule:", err);
    throw err;
  }
}

/**
 * (Optional) Manually delete a schedule before it fires
 */
async function deleteSchedule(scheduleName: string) {
  try {
    await schedulerClient.send(
      new DeleteScheduleCommand({
        Name: scheduleName,
        GroupName: "s3-expirations",
      })
    );
    console.log(`🗑️ Deleted schedule ${scheduleName}`);
  } catch (err) {
    console.error("❌ Error deleting schedule:", err);
    throw err;
  }
}

// Example usage
(async () => {
  const bucket = "my-app-uploads";
  const key    = "temp-files/file-12345.txt";
  const deleteAt = new Date(Date.now() + 5 * 60 * 1000);
  const scheduleName = `delete-${key.replace(/\W/g, "-")}-${deleteAt.getTime()}`;

  await scheduleS3Expiration(scheduleName, bucket, key, deleteAt);

  // If you need to cancel early:
  // await deleteSchedule(scheduleName);
})();
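
For completeness, here is a minimal sketch of what the CleanupS3Object target Lambda could look like. EventBridge Scheduler delivers the schedule's Input string as the Lambda event, so the payload shape matches what scheduleS3Expiration serialized:

import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// EventBridge Scheduler invokes the target with the schedule's Input
// string as the event payload -- here, { bucket, key } from above.
export const handler = async (event: { bucket: string; key: string }) => {
  await s3.send(
    new DeleteObjectCommand({ Bucket: event.bucket, Key: event.key })
  );
  console.log(`Deleted s3://${event.bucket}/${event.key}`);
};

Letting errors propagate, rather than catching and swallowing them, keeps failures visible instead of silently lost.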

Why This Approach Is Robust

  1. Centralized Configuration with CreateScheduleCommand

    A single API call defines schedule expression, target Lambda, retry policy, DLQ, and cleanup behavior.

  2. Logical Grouping via GroupName

    Use GroupName to tag related schedules (e.g., s3-expirations, promo-offers) for easy filtering and monitoring in the console or via API. Note that a custom group must be created before schedules can reference it; see the first sketch after this list.

  3. Built-in Retry Capabilities

    Retries on transient failures are configurable via RetryPolicy (the sample above allows up to 25 attempts within 24 hours), protecting against network blips or downstream service hiccups.

  4. Dead-Letter Queue (DLQ)

    Events that still fail after all retries are sent to an Amazon SQS queue (see the second sketch after this list):

    • Provides visibility into failures
    • Enables manual fixes or automated replay
    • Prevents silent data loss
  5. Self-Cleaning Architecture

    ActionAfterCompletion: "DELETE" ensures schedules never accumulate, keeping your account tidy and within quotas.

  6. Massive Scalability
    Support for up to 10 million schedules per AWS account (soft limit, can be raised), making it ideal for applications with thousands—or millions—of independent timers.
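
On point 2 above, a custom schedule group must be created once before CreateScheduleCommand can place schedules in it, and it then doubles as a filter. A minimal sketch, reusing the schedulerClient from the sample above:

import {
  CreateScheduleGroupCommand,
  ListSchedulesCommand,
} from "@aws-sdk/client-scheduler";

// One-time setup: create the group referenced by GroupName above
// (throws ConflictException if it already exists).
await schedulerClient.send(
  new CreateScheduleGroupCommand({ Name: "s3-expirations" })
);

// Later: list only this group's schedules for monitoring or bulk cleanup.
const { Schedules } = await schedulerClient.send(
  new ListSchedulesCommand({ GroupName: "s3-expirations" })
);
console.log(Schedules?.map((s) => s.Name));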
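
And on point 4: the DLQ is a standard SQS queue, so failed events can be pulled off with the SQS SDK for inspection or replay. A sketch, assuming the placeholder queue from DeadLetterConfig above:

import { SQSClient, ReceiveMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });

// Pull events that exhausted their retries off the DLQ for inspection.
const { Messages } = await sqs.send(
  new ReceiveMessageCommand({
    QueueUrl:
      "https://sqs.us-east-1.amazonaws.com/<AWS_ACCOUNT_ID>/<YOUR_DLQ_NAME>",
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 5, // long poll briefly instead of hammering the queue
  })
);

for (const message of Messages ?? []) {
  console.log("Failed delivery:", message.Body);
  // Inspect, replay, or alert -- then delete the message once handled.
}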


This delivers fine-grained control over object retention, outperforming the day-granularity of standard S3 lifecycle rules for any application with diverse, per-object requirements.
