Per-feature quota in Apify KeyValueStore — no DB, no cron, no drift

The reflex when you hear "quota" is to reach for a database plus a cron job. For a small, self-maintained Actor, complexity is the bigger risk: the heavier the system, the harder it is to keep stable for months.

For this project I went with KeyValueStore plus a month-key. The goals were clear:

  • Meter per feature, not just total invocations
  • Reset naturally each month — no scheduled job
  • Keep the logic in one place so it doesn't drift

The trick is in the key layout. Conceptually:

```
quota:{tenant}:{feature}:2026-05
```

When a new month starts, you write to a new key. You don't wipe the old one. "Reset" becomes a naming switch, not a data migration. No month-end task can fail, and you don't have to think about timezone edges either.
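A minimal sketch of the key builder, assuming UTC month boundaries; the function name and signature are illustrative, not lifted from the actual quota.py:

```python
from datetime import datetime, timezone

def quota_key(tenant: str, feature: str, now: datetime | None = None) -> str:
    """Build the month-scoped key, e.g. quota:acme:export:2026-05."""
    now = now or datetime.now(timezone.utc)
    return f"quota:{tenant}:{feature}:{now:%Y-%m}"
```

Because the month is baked into the key, "rolling over" is just this function returning a different string on the first of the month.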

Per-feature metering matters because feature cost varies wildly. If you only meter total invocations, heavy features crowd out light ones, and everyone ends up feeling like the limit is unfair.
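One way to express those different costs is a tiny per-feature limits table; the feature names and caps below are placeholders, not the Actor's real numbers:

```python
# Monthly caps per feature (illustrative values only).
MONTHLY_LIMITS = {
    "scrape": 500,     # heavy: each run burns real compute
    "export": 2000,    # medium: formatting and upload work
    "notify": 10000,   # light: a single webhook call
}
```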

The discipline in quota.py is small but strict (a sketch follows the list):

  • Check the limit before doing the work
  • Only record after the operation actually succeeds
  • Keep the key format fixed — no variants
  • On failure, do not write a guess
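Here is a minimal sketch of that single entry point, building on the key builder and limits table above. It is written against an async store with get_value/set_value; the Apify Python SDK's KeyValueStore exposes methods of that shape, but treat the exact signatures here as assumptions:

```python
class QuotaExceeded(Exception):
    pass

async def with_quota(store, tenant: str, feature: str, operation):
    """Run `operation` only if this month's quota allows it; count it only on success."""
    key = quota_key(tenant, feature)
    used = await store.get_value(key) or 0

    # 1. Check the limit before doing the work.
    if used >= MONTHLY_LIMITS[feature]:
        raise QuotaExceeded(f"{tenant} hit the {feature} limit for this month")

    # 2. Do the work; if it raises, nothing is written, so no guessed counts.
    result = await operation()

    # 3. Only record after the operation actually succeeded.
    #    Plain read-modify-write: acceptable under the bounded concurrency this design assumes.
    await store.set_value(key, used + 1)
    return result
```

In the Actor itself, `store` would typically come from `await Actor.open_key_value_store()`, though the exact call depends on the Apify SDK version you are on.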

"No drift" isn't an algorithmic property. It's a write-semantics property. The moment your counter writes are scattered across modules — each doing it slightly differently — you've already lost.

This design is not universal. It fits products of low-to-medium complexity: multi-tenant, but with bounded concurrency. If you later outgrow it, swapping in a real metering pipeline is a clean migration, because the surface is small.

My one piece of advice: get the quota schema right in the first version. Adding it later almost always drags pricing, routing, and error semantics into the rewrite with it.

Related

If you're building SMB automation services, the n8n SMB Automation Pack ($29) pairs nicely with this pattern — quota, retries, and notifications all live in one operable flow.


Previous notes in this series:
