Modern systems often rely on APIs or services that enforce strict limits on how many requests can be made and how frequently, whether for scalability, fairness, resource protection, or simply your subscription plan.
You might be working with:
- An internal rate-limited service
- An async job runner that must avoid resource contention
- A background task that syncs data without overwhelming a target system
In these cases, trying to "pace" requests with custom code typically leads to:
- Fragile setTimeout() logic
- Inconsistent behavior under load
- Queues that are difficult to debug or scale
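To see why, here is a sketch of the kind of hand-rolled pacer those hacks tend to produce (illustrative only; the function names are made up). It handles the happy path, but it has no concurrency cap, no quota, and no queue you can inspect, pause, or debug:

```js
// A hand-rolled pacer: the kind of ad-hoc timer code Bottleneck replaces.
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function paceRequests(jobs, gapMs) {
  const results = [];
  for (const job of jobs) {
    results.push(await job()); // one at a time...
    await delay(gapMs); // ...with a fixed gap after each one finishes
  }
  return results;
}
```

Every new requirement (retries, priorities, quotas) means more ad-hoc timer logic layered on top, which is exactly the complexity a scheduling library should own.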
That’s where Bottleneck comes in — a lightweight, production-ready scheduling library for Node.js that brings powerful, composable rate-limiting primitives to your async workflows.
## 🧩 The 3 Core Flags and What They Solve
### 1. ⏱️ `minTime`: Space Out Requests (RPS)

**Problem:** "My API only allows 10 requests per second."

**Solution:**

```js
const limiter = new Bottleneck({ minTime: 100 }); // 100ms between starts = 10 RPS
```
**Usage:**

```js
const fetchData = limiter.wrap(async (input) => {
  return fetch(`https://example.com/api?q=${input}`);
});

// Queue up 200 requests
const inputs = Array.from({ length: 200 }, (_, i) => ({ id: i }));

Promise.all(inputs.map((input) => fetchData(input))).then(() => {
  console.log("All requests completed!");
});
```
This guarantees at least 100ms between job starts, giving you consistent pacing.
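The key nuance is that `minTime` spaces out job *starts*, not end-to-start gaps, so slow jobs can still overlap. A minimal pure-JS sketch of that start-spacing semantic (not Bottleneck's actual implementation):

```js
// Sketch of minTime semantics: starts are spaced by minTimeMs,
// but a slow job does not delay the next one's start.
function makeStartSpacer(minTimeMs) {
  let nextStart = 0; // earliest allowed start time for the next job
  return async (job) => {
    const now = Date.now();
    const wait = Math.max(0, nextStart - now);
    nextStart = Math.max(now, nextStart) + minTimeMs; // reserve the next slot
    if (wait > 0) await new Promise((r) => setTimeout(r, wait));
    return job();
  };
}
```

Contrast this with the naive loop approach, which waits for each job to *finish* before even starting the next.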
### 2. 🔄 `maxConcurrent`: Limit Parallel Jobs

**Problem:** "I don't want more than 3 jobs running at once."

**Solution:**

```js
const limiter = new Bottleneck({ maxConcurrent: 3 });
```
**Usage:**

```js
const processFile = limiter.wrap(async (file) => {
  return compress(file);
});
```
No matter how many jobs are queued, only 3 run at the same time.
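Conceptually this is a counting semaphore. Here is a minimal sketch of the invariant it enforces (illustration only; Bottleneck's real scheduler is more sophisticated and combines this with its queue):

```js
// A counting semaphore: never more than `limit` jobs running at once.
function makeSemaphore(limit) {
  let active = 0;
  const waiting = []; // resolvers for jobs blocked waiting for a slot

  function acquire() {
    if (active < limit) {
      active++;
      return Promise.resolve();
    }
    return new Promise((resolve) => waiting.push(resolve));
  }

  function release() {
    active--;
    const next = waiting.shift();
    if (next) {
      active++; // hand the freed slot to the next waiter
      next();
    }
  }

  return async function run(job) {
    await acquire();
    try {
      return await job();
    } finally {
      release(); // free the slot even if the job throws
    }
  };
}
```

Jobs beyond the limit simply wait their turn rather than failing.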
### 3. 🪣 `reservoir`: Enforce Request Quotas

**Problem:** "The API allows only 1000 requests per hour."

**Solution:**

```js
const limiter = new Bottleneck({
  reservoir: 1000, // start with 1000 "tokens"
  reservoirRefreshInterval: 60 * 60 * 1000, // refill every hour
  reservoirRefreshAmount: 1000, // refill back to 1000
});
```
**Usage:**

```js
const sendEmail = limiter.wrap(async (email) => {
  return smtpClient.send(email);
});
```
No more than 1000 jobs run in any one-hour window. When the bucket is empty, queued jobs wait for the next refill.
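Conceptually this is a token bucket: each job spends one token, and the bucket is topped back up on an interval. A tiny sketch of that mechanic (illustrative only; Bottleneck also queues the jobs that arrive while the bucket is empty, rather than rejecting them):

```js
// Minimal token bucket: the mechanic behind reservoir + reservoirRefresh*.
function makeBucket(capacity, refreshMs) {
  let tokens = capacity;
  // Refill the bucket on an interval; unref so it won't keep the process alive.
  setInterval(() => {
    tokens = capacity;
  }, refreshMs).unref?.();
  return {
    tryTake: () => (tokens > 0 ? (tokens--, true) : false), // spend one token
    remaining: () => tokens,
  };
}
```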
## 🔁 How They Work Together

Bottleneck applies all constraints together:

- `minTime` → controls how fast jobs start
- `maxConcurrent` → limits how many run at once
- `reservoir` → caps total jobs over time
## ✅ Example

"I want to send 10 requests per second, at most 3 in parallel, and not exceed 1000 requests per hour."

Here is a sample script that exercises all three flags together; just change the delay timer to experiment:
```js
import Bottleneck from "bottleneck";

const limiter = new Bottleneck({
  minTime: 100, // 10 requests per second
  maxConcurrent: 3,
  reservoir: 1000,
  reservoirRefreshInterval: 3600000, // 1 hour
  reservoirRefreshAmount: 1000,
});

function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

const callAPI = limiter.wrap(async (payload) => {
  await delay(500); // simulate a slow API call
  console.log("finished request for payload:", payload.id);
});

// Queue 100 requests
const inputs = Array.from({ length: 100 }, (_, i) => ({ id: i }));

Promise.all(inputs.map((payload) => callAPI(payload))).then(() => {
  console.log("All requests completed!");
});
```
## 🧪 Quick Flag Guide

| Goal | Use |
| --- | --- |
| Limit requests per second | `minTime` |
| Avoid parallel overload | `maxConcurrent` |
| Respect quota (e.g. 1000/hr) | `reservoir` + refresh |
| Combine pacing & concurrency | `minTime` + `maxConcurrent` |
| Full traffic shaping | All 3 |
## 🧘 Final Thoughts

The next time you hit a `429 Too Many Requests` or crash a service with async overload, don't reach for hacks. Use Bottleneck.

✅ Declarative.
✅ Predictable.
✅ Powerful.