zwx00

Replay failed Stripe events via webhook

Sometimes webhook events fail to deliver, and you need to replay them to ensure your system processes all important events. Here's a handy one-liner using the Stripe CLI to resend failed subscription cancellation events:

stripe events list \
  --type=customer.subscription.deleted \
  --delivery-success=false \
  --live \
  --limit 150 \
  | jq ".data[].id" \
  | xargs -n1 -t stripe events resend \
    --live \
    --webhook-endpoint=we_LALALALA

Let's break down what this command does:

  1. stripe events list: Lists Stripe events

    • --type=customer.subscription.deleted: Filters for subscription cancellation events
    • --delivery-success=false: Only shows failed deliveries
    • --live: Uses live mode (not test mode)
    • --limit 150: Retrieves up to 150 events
  2. jq ".data[].id": Extracts just the event IDs from the JSON response (see the sample output after this list)

  3. xargs -n1 -t: Processes each event ID one at a time

    • -n1: Passes one argument per command
    • -t: Prints each command before executing it
  4. stripe events resend: Resends each event to your webhook endpoint

    • --live: Uses live mode
    • --webhook-endpoint=we_LALALALA: Specifies the webhook endpoint to use
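
To make step 2 concrete, here's the extraction run against a trimmed, made-up response (the event IDs are placeholders):

echo '{"data": [{"id": "evt_111"}, {"id": "evt_222"}]}' | jq ".data[].id"

which prints:

"evt_111"
"evt_222"

The IDs come out quoted, but xargs strips those quotes before handing each one to stripe events resend (you could also pass -r to jq to get raw strings in the first place).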

Remember to replace we_LALALALA with your actual webhook endpoint ID.
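
If you don't have the endpoint ID handy, the CLI can list the endpoints configured on your account (assuming your key has access to them); the id values it prints are the we_... identifiers you need:

stripe webhook_endpoints list --live \
  | jq ".data[] | {id, url}"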

This command is particularly useful when:

  • Your webhook endpoint was down
  • You had network issues
  • You're testing new webhook handling code (see the dry-run sketch below)
  • You need to backfill missed events
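
For the testing and backfill cases especially, it's worth doing a dry run before resending anything against live data. One way is to put echo in front of the resend, so each command is only printed, never executed:

stripe events list \
  --type=customer.subscription.deleted \
  --delivery-success=false \
  --live \
  --limit 150 \
  | jq ".data[].id" \
  | xargs -n1 echo stripe events resend \
    --live \
    --webhook-endpoint=we_LALALALA

Once the printed commands look right, drop the echo and run it for real.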

Make sure you have both the Stripe CLI and jq installed before running this command.
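
On macOS, for example, both are a Homebrew install away (other platforms have their own packages; see the Stripe CLI docs), and you'll need to authenticate the CLI once:

brew install stripe/stripe-cli/stripe   # Stripe's Homebrew tap
brew install jq
stripe login                            # authenticate against your Stripe account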

Happy webhooking! 🎣

Top comments (1)

Phil Leggetter

Nice write-up. This approach works well when you're dealing with a small number of failed events and can afford to replay them manually through the dashboard. For early-stage products or lower traffic systems, that’s probably all you need.

At scale, though, you’ll likely want to shift to a queue-based architecture. That means introducing an ingestion layer to capture incoming events, pushing them onto a queue, and having workers consume and process them reliably. It gives you better control over throughput, retries, and visibility into failures using your queue and DLQ (dead letter queue).

If you're interested in what that architecture looks like, this post goes into more detail: hookdeck.com/blog/webhooks-at-scale