DEV Community

Mary Olowu

Exponential vs Linear: How to Tell If Your Event-Driven Trigger Is Looping

The Core Idea

When you're building rate limits for event-driven triggers, you face a fundamental problem: how do you set a threshold that catches loops without blocking legitimate high-volume workloads?

The answer is that loops and legitimate traffic have fundamentally different growth characteristics:

Legitimate triggers scale linearly with user actions.

  • 1 user creates 1 order → 1 trigger execution
  • 50 users create 50 orders per minute → 50 trigger executions per minute
  • The ratio is always 1:1. Trigger executions track user actions.

Recursive loops scale exponentially from a single user action.

  • 1 user creates 1 record → trigger fires → function creates another record → trigger fires again
  • After 10 seconds: 100+ executions
  • After 60 seconds: 700+ executions
  • All from 1 user action. The trigger is its own input.

This isn't a subtle distinction. It's the difference between a line and an exponential curve. And it means your rate limit doesn't need to be clever — it just needs to sit in the massive gap between the two curves.
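The two regimes are easy to see with a toy calculation. This sketch uses illustrative numbers (a 100ms loop iteration, 50 user actions per minute), not measured traffic:

```typescript
// Toy model of the two growth regimes; all numbers are illustrative.

// Legitimate traffic: trigger executions track user actions 1:1.
const linear = (userActionsPerMinute: number): number => userActionsPerMinute;

// Self-loop chain: one execution every `iterationMs`, forever,
// all descending from a single user action.
const loop = (windowMs: number, iterationMs: number): number =>
  Math.floor(windowMs / iterationMs);

console.log(linear(50));        // 50 executions from 50 user actions
console.log(loop(60_000, 100)); // 600 executions from 1 user action
```

The point of the model is that the loop's count depends only on elapsed time and iteration speed, never on user behavior, which is why the two curves separate so quickly.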

Why This Matters for Rate Limit Design

A rate limit of 100 executions per 60 seconds:

  • Never blocks legitimate traffic. Even a high-volume e-commerce system processing 80 orders per minute sits under the limit.
  • Always catches loops. Even a slow loop iterating every 100ms hits 100 executions in about 10 seconds; a fast one gets there in under a second.

The gap between "highest legitimate volume" and "slowest possible loop" is enormous. You don't need machine learning or anomaly detection. You just need basic arithmetic.

The Math

A recursive trigger loop grows relentlessly. In the simplest chain, one execution creates one record, that record fires one trigger, and the cycle repeats at machine speed, one execution per iteration. If each execution instead creates two records, each of which fires the trigger again, executions double with every generation:

Generation  New executions  Cumulative
1           1               1
2           2               3
3           4               7
10          512             1,023
16          32,768          65,535

Even with network latency and compute overhead slowing each iteration to 100ms, a simple 1:1 chain hits 100 executions in about 10 seconds. With faster execution (10ms per iteration), it hits 100 in a second, and a fan-out loop gets there in just seven generations.

Meanwhile, the highest legitimate trigger volume we've seen across our platform is ~80 executions per minute per trigger — and that's a busy e-commerce workspace during a flash sale.

The gap is 10x-100x. Your rate limit has a lot of room.
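The arithmetic is simple enough to check directly. This sketch (assumed iteration times, not measurements) computes how quickly each loop shape trips a 100-execution limit:

```typescript
// Back-of-envelope check with assumed parameters.

// Chain loop: one execution per iteration, so time-to-limit is just
// limit * iteration time.
const chainTimeToLimitMs = (limit: number, iterationMs: number): number =>
  limit * iterationMs;

// Fan-out loop: cumulative executions after n generations is 2^n - 1,
// so find the first generation whose cumulative count reaches the limit.
const fanoutGenerationsToLimit = (limit: number): number => {
  let gen = 0;
  while (2 ** gen - 1 < limit) gen++;
  return gen;
};

console.log(chainTimeToLimitMs(100, 100));  // 10000 ms: ~10 s at 100ms/iteration
console.log(fanoutGenerationsToLimit(100)); // 7 generations (2^7 - 1 = 127)
```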

What About Burst Traffic?

The natural objection: "What about a bulk import? A user imports 500 records at once, and each fires a trigger."

This is a valid concern but a different problem:

  1. Bulk imports via API publish a single aggregate event (records_bulk_created), not 500 individual events. Event-driven triggers don't match on the aggregate event, so they don't fire at all.

  2. Batch operations from compute functions do publish individual events. But even 500 trigger executions from a batch operation is a one-time burst, not a sustained loop. If your rate limit window is 60 seconds, the burst registers once. A loop registers continuously.

  3. If batch-triggered functions need to fire triggers, the rate limit should be configurable per-trigger. Default 100/60s works for 99% of cases. The 1% that needs more can raise it.
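A per-trigger override can be as simple as a lookup with a default. The shape below is a sketch: the `overrides` map, trigger IDs, and field names are illustrative assumptions, not the platform's actual API.

```typescript
// Hypothetical per-trigger rate-limit configuration with a safe default.

interface RateLimitConfig {
  maxExecutions: number;
  windowSeconds: number;
}

const DEFAULT_LIMIT: RateLimitConfig = { maxExecutions: 100, windowSeconds: 60 };

// Overrides would come from each trigger's settings; hardcoded here.
const overrides = new Map<string, RateLimitConfig>([
  ["bulk-sync-trigger", { maxExecutions: 1000, windowSeconds: 60 }],
]);

function limitFor(triggerId: string): RateLimitConfig {
  return overrides.get(triggerId) ?? DEFAULT_LIMIT;
}

console.log(limitFor("order-created-trigger")); // default: 100 per 60s
console.log(limitFor("bulk-sync-trigger"));     // raised: 1000 per 60s
```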

Implementing the Test

The simplest implementation is a Redis counter with a TTL:

async function isWithinRateLimit(triggerId: string): Promise<boolean> {
  const key = `trigger_rate:${triggerId}`;
  // INCR is atomic, so concurrent instances never race on the count
  const count = await redis.incr(key);
  // the first increment in a window starts the 60-second TTL
  if (count === 1) await redis.expire(key, 60);
  return count <= 100;
}

That's it. A handful of lines: the INCR is atomic (no race conditions across instances), the EXPIRE handles cleanup, and the threshold separates linear from exponential with a 10x margin. One caveat: if the process crashes between the INCR and the EXPIRE, the key never gets a TTL; wrapping both commands in a short Lua script closes that gap.
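For unit-testing the threshold without a Redis instance, an in-memory counter with the same window semantics works as a stand-in. This is a sketch for tests, not the production path; the injectable clock exists only to make time deterministic:

```typescript
// In-memory stand-in for the Redis counter, with the same limit/window.
function makeRateLimiter(
  limit = 100,
  windowMs = 60_000,
  now: () => number = () => Date.now(),
) {
  const windows = new Map<string, { start: number; count: number }>();
  return (triggerId: string): boolean => {
    const t = now();
    const w = windows.get(triggerId);
    // start a fresh window if none exists or the old one has expired
    if (!w || t - w.start >= windowMs) {
      windows.set(triggerId, { start: t, count: 1 });
      return true;
    }
    w.count++;
    return w.count <= limit;
  };
}

// A chain loop firing every 100ms is allowed exactly 100 executions
// before the limiter trips.
let clock = 0;
const check = makeRateLimiter(100, 60_000, () => clock);
let allowed = 0;
while (check("loop-trigger")) {
  allowed++;
  clock += 100;
}
console.log(allowed); // 100
```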

Beyond Rate Limiting

Rate limiting is the safety net, not the whole solution. For a complete defense:

  1. Block obvious loops at configuration time. When a user creates a trigger on record_created for collection X, and the function calls api.createRecord('X', ...), reject it with a clear error. This is prevention, not detection.

  2. Track causality at runtime. Propagate a sourceTriggerId through event chains so you can identify self-loops without waiting for the rate limit to trip. The user gets a "recursive loop detected" message instead of a vague "rate limit exceeded."

  3. Rate limit as the catch-all. For cross-trigger chains (A→B→A) and exotic patterns that bypass the first two layers.
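Causality tracking can be sketched as a chain of trigger IDs carried on each event. The `EventMeta` shape and helper names below are assumptions for illustration; the post doesn't specify the real event schema:

```typescript
// Each published event carries the chain of trigger IDs that produced it.
interface EventMeta {
  name: string;
  triggerChain: string[];
}

// Called before running a trigger for an incoming event: if this trigger
// already appears in the chain, its own output is about to feed it.
function detectLoop(event: EventMeta, triggerId: string): boolean {
  return event.triggerChain.includes(triggerId);
}

// When a trigger's function publishes a new event, extend the chain.
function childEvent(parent: EventMeta, name: string, triggerId: string): EventMeta {
  return { name, triggerChain: [...parent.triggerChain, triggerId] };
}

const e1: EventMeta = { name: "record_created", triggerChain: [] };
// trigger-A's function created a record, publishing a new event
const e2 = childEvent(e1, "record_created", "trigger-A");
console.log(detectLoop(e2, "trigger-A")); // true: self-loop caught immediately
console.log(detectLoop(e2, "trigger-B")); // false: B may legitimately fire
```

Because the chain is checked before execution, the user sees the specific "recursive loop detected" error rather than a rate-limit trip after a hundred wasted executions.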

We wrote a detailed post about implementing all three layers: How We Stopped Recursive Trigger Loops From Melting Our Compute Fleet.

The Takeaway

If your platform has event-driven triggers, ask yourself: can a trigger's output become its own input? If yes, you need loop protection. And the simplest, most reliable loop protection is a rate limit set in the gap between linear user-driven traffic and exponential recursive behavior.

That gap is enormous. Use it.


Building event-driven infrastructure? We'd love to hear about your trigger architecture challenges. Reach out on Twitter/X @centraliio or drop a comment.
