When Should You Implement Throttling in Your Code?
For big projects, it’s usually best to use tools like Cloudflare Rate Limiting or HAProxy. These are powerful, reliable, and take care of the heavy lifting for you.
But for smaller projects—or if you want to learn how things work—you can create your own rate limiter right in your code. Why?
- It’s Simple: You’ll build something straightforward that’s easy to understand.
- It’s Budget-Friendly: No extra costs beyond hosting your server.
- It Works for Small Projects: As long as traffic is low, it keeps things fast and efficient.
- It’s Reusable: You can copy it into other projects without setting up new tools or services.
What You Will Learn
By the end of this guide, you’ll know how to build a basic throttler in TypeScript to protect your APIs from being overwhelmed. Here’s what we’ll cover:
- Configurable Time Limits: Each blocked attempt increases the lockout duration to prevent abuse.
- Request Caps: Set a maximum number of allowed requests. This is especially useful for APIs that involve paid services, like OpenAI.
- In-Memory Storage: A simple solution that works without external tools like Redis—ideal for small projects or prototypes.
- Per-User Limits: Track requests on a per-user basis using their IPv4 address. We’ll leverage SvelteKit to easily retrieve the client IP with its built-in method.
This guide is designed to be a practical starting point, perfect for developers who want to learn the basics without unnecessary complexity. But it is not production-ready.
Before starting, I want to give due credit to Lucia's Rate Limiting section, which inspired this implementation.
Throttler Implementation
Let's define the `Throttler` class, along with the `ThrottlingCounter` type it stores:

```ts
interface ThrottlingCounter {
  index: number;
  updatedAt: number;
}

export class Throttler {
  private storage = new Map<string, ThrottlingCounter>();

  constructor(private timeoutSeconds: number[]) {}
}
```
The `Throttler` constructor accepts a list of timeout durations (`timeoutSeconds`). Each time a user is blocked, the duration increases progressively based on this list. Eventually, when the final timeout is reached, you could even trigger a callback to permanently ban the user's IP, though that's beyond the scope of this guide.
Here's an example of creating a throttler instance that blocks users for increasing intervals:

```ts
const throttler = new Throttler([1, 2, 4, 8, 16]);
```

This instance will block users for one second the first time, two seconds the second time, and so on.
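To make that progression concrete, here is a small standalone sketch (not part of the article's class) that computes the wait applied at each successive index, including the cap once the list is exhausted:

```ts
const timeoutSeconds = [1, 2, 4, 8, 16];

// Wait (in seconds) applied at each successive index,
// capped at the last entry once the list is exhausted:
const waits = [0, 1, 2, 3, 4, 5, 6].map(
  (i) => timeoutSeconds[Math.min(i, timeoutSeconds.length - 1)]
);

console.log(waits); // [1, 2, 4, 8, 16, 16, 16]
```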
We use a `Map` to store IP addresses and their corresponding data. A `Map` is ideal because it handles frequent additions and deletions efficiently.

Pro Tip: Use a `Map` for dynamic data that changes frequently. For static, unchanging data, a plain object is better. (Rabbit hole 1)
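As a tiny illustration of why a `Map` fits here: keys come and go constantly as users appear and their records expire (the IPs below are placeholders):

```ts
const storage = new Map<string, number>();

// IPs are added and removed continuously as users appear and expire.
storage.set('203.0.113.7', Date.now());
storage.set('198.51.100.23', Date.now());
storage.delete('203.0.113.7');

console.log(storage.size); // 1
console.log(storage.has('198.51.100.23')); // true
```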
When your endpoint receives a request, it extracts the user's IP address and consults the `Throttler` to determine whether the request should be allowed.
How it Works
- **Case A: New or Inactive User.** If the IP is not found in the `Throttler`, it's either the user's first request or they've been inactive long enough. In this case:
  - Allow the action.
  - Track the user by storing their IP with an initial timeout.
- **Case B: Active User.** If the IP is found, the user has made previous requests. In this case:
  - Check whether the required wait time (based on the `timeoutSeconds` array and the stored `index`) has passed since their last block.
  - If enough time has passed: allow the request, update the timestamp, and increment the timeout index (capped at the last index to prevent overflow).
  - If not, deny the request.
```ts
export class Throttler {
  // ...

  public consume(key: string): boolean {
    const counter = this.storage.get(key) ?? null;
    const now = Date.now();

    // Case A
    if (counter === null) {
      // At the next request, the key will be found.
      // Index 0 of [1, 2, 4, 8, 16] returns 1:
      // that's the number of seconds the user will have to wait.
      this.storage.set(key, {
        index: 0,
        updatedAt: now
      });
      return true; // allowed
    }

    // Case B
    const timeoutMs = this.timeoutSeconds[counter.index] * 1000;
    const allowed = now - counter.updatedAt >= timeoutMs;
    if (!allowed) {
      return false; // denied
    }

    // Allow the call, but increase the timeout for subsequent requests.
    counter.updatedAt = now;
    counter.index = Math.min(counter.index + 1, this.timeoutSeconds.length - 1);
    this.storage.set(key, counter);
    return true; // allowed
  }
}
```
When updating the index, we cap it at the last index of `timeoutSeconds`. Without the cap, `counter.index + 1` would eventually run past the end of the array, and the next `this.timeoutSeconds[counter.index]` access would return `undefined`, turning the timeout calculation into `NaN` and breaking the throttler.
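To see `consume` in action outside an endpoint, here is a self-contained sketch (the class is repeated from above so the snippet runs on its own; the IP is a placeholder):

```ts
interface ThrottlingCounter {
  index: number;
  updatedAt: number;
}

class Throttler {
  private storage = new Map<string, ThrottlingCounter>();
  constructor(private timeoutSeconds: number[]) {}

  public consume(key: string): boolean {
    const counter = this.storage.get(key) ?? null;
    const now = Date.now();
    if (counter === null) {
      // First sighting: track the key and allow the request.
      this.storage.set(key, { index: 0, updatedAt: now });
      return true;
    }
    const timeoutMs = this.timeoutSeconds[counter.index] * 1000;
    if (now - counter.updatedAt < timeoutMs) return false;
    // Enough time has passed: allow, and escalate the next timeout.
    counter.updatedAt = now;
    counter.index = Math.min(counter.index + 1, this.timeoutSeconds.length - 1);
    this.storage.set(key, counter);
    return true;
  }
}

const throttler = new Throttler([1, 2, 4, 8, 16]);

console.log(throttler.consume('203.0.113.7')); // true  (new key, tracked at index 0)
console.log(throttler.consume('203.0.113.7')); // false (must wait timeoutSeconds[0] = 1s)
```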
Endpoint example
This example shows how to use the `Throttler` to limit how often a user can call your API. If the user makes too many requests, they'll get an error instead of running the main logic.

```ts
import { error } from '@sveltejs/kit';

const throttler = new Throttler([1, 2, 4, 8, 16, 30, 60, 300]);

export async function GET({ getClientAddress }) {
  const IP = getClientAddress();
  if (!throttler.consume(IP)) {
    throw error(429, { message: 'Too Many Requests' });
  }

  // Read from DB, call OpenAI - do the thing.
  return new Response(null, { status: 200 });
}
```
Note for Authentication
When using rate limiting with login systems, you might face this issue:

- A user logs in, triggering the `Throttler` to associate a timeout with their IP.
- The user logs out or their session ends (e.g., they log out immediately, the session cookie expires, the browser crashes, etc.).
- When they try to log in again shortly after, the `Throttler` may still block them, returning a `429 Too Many Requests` error.

To prevent this, use the user's unique `userID` instead of their IP for rate limiting. You must also reset the throttler state after a successful login to avoid unnecessary blocks.
Add a `reset` method to the `Throttler` class:

```ts
export class Throttler {
  // ...

  public reset(key: string): void {
    this.storage.delete(key);
  }
}
```
And use it after a successful login:

```ts
const user = db.get(email);

if (!throttler.consume(user.id)) {
  throw error(429);
}

const validPassword = verifyPassword(user.password, providedPassword);
if (!validPassword) {
  throw error(401);
}

throttler.reset(user.id); // Clear throttling for the user
```
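A self-contained sketch of how `reset` changes the picture (the class is condensed from the snippets above; the key is a placeholder):

```ts
interface ThrottlingCounter {
  index: number;
  updatedAt: number;
}

class Throttler {
  private storage = new Map<string, ThrottlingCounter>();
  constructor(private timeoutSeconds: number[]) {}

  public consume(key: string): boolean {
    const counter = this.storage.get(key) ?? null;
    const now = Date.now();
    if (counter === null) {
      this.storage.set(key, { index: 0, updatedAt: now });
      return true;
    }
    const timeoutMs = this.timeoutSeconds[counter.index] * 1000;
    if (now - counter.updatedAt < timeoutMs) return false;
    counter.updatedAt = now;
    counter.index = Math.min(counter.index + 1, this.timeoutSeconds.length - 1);
    this.storage.set(key, counter);
    return true;
  }

  public reset(key: string): void {
    this.storage.delete(key);
  }
}

const throttler = new Throttler([60]);

console.log(throttler.consume('user-42')); // true  (first attempt, now tracked)
console.log(throttler.consume('user-42')); // false (would have to wait 60s)

throttler.reset('user-42'); // e.g. after a successful login

console.log(throttler.consume('user-42')); // true  (state cleared)
```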
Managing Stale IP Records with Periodic Cleanup
As your throttler tracks IPs and rate limits, it's important to think about how and when to remove IP records that are no longer needed. Without a cleanup mechanism, your throttler will continue to store records in memory, potentially leading to performance issues over time as the data grows.
To prevent this, you can implement a cleanup function that periodically removes old records after a certain period of inactivity. Here's an example of how to add a simple cleanup method to remove stale entries from the throttler.
```ts
export class Throttler {
  // ...

  public cleanup(): void {
    const now = Date.now();

    // Capture the keys first to avoid issues during iteration (we use .delete)
    const keys = Array.from(this.storage.keys());

    for (const key of keys) {
      const counter = this.storage.get(key);
      if (!counter) {
        // Skip if the counter was already deleted (handles concurrent access)
        continue;
      }

      // If the IP is at the first timeout, remove it from storage
      if (counter.index === 0) {
        this.storage.delete(key);
        continue;
      }

      // Otherwise, reduce the timeout index and update the timestamp
      counter.index -= 1;
      counter.updatedAt = now;
      this.storage.set(key, counter);
    }
  }
}
```
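A self-contained sketch showing the effect of `cleanup` (the class is condensed from the snippets above; the IP is a placeholder):

```ts
interface ThrottlingCounter {
  index: number;
  updatedAt: number;
}

class Throttler {
  private storage = new Map<string, ThrottlingCounter>();
  constructor(private timeoutSeconds: number[]) {}

  public consume(key: string): boolean {
    const counter = this.storage.get(key) ?? null;
    const now = Date.now();
    if (counter === null) {
      this.storage.set(key, { index: 0, updatedAt: now });
      return true;
    }
    const timeoutMs = this.timeoutSeconds[counter.index] * 1000;
    if (now - counter.updatedAt < timeoutMs) return false;
    counter.updatedAt = now;
    counter.index = Math.min(counter.index + 1, this.timeoutSeconds.length - 1);
    this.storage.set(key, counter);
    return true;
  }

  public cleanup(): void {
    const now = Date.now();
    for (const key of Array.from(this.storage.keys())) {
      const counter = this.storage.get(key);
      if (!counter) continue;
      if (counter.index === 0) {
        this.storage.delete(key); // fully expired: stop tracking
        continue;
      }
      counter.index -= 1; // step the timeout back down
      counter.updatedAt = now;
      this.storage.set(key, counter);
    }
  }
}

const throttler = new Throttler([60, 120]);

throttler.consume('198.51.100.1');              // tracked at index 0
console.log(throttler.consume('198.51.100.1')); // false: still tracked

throttler.cleanup();                            // index-0 entries are dropped

console.log(throttler.consume('198.51.100.1')); // true: treated as new again
```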
A very simple (though probably not the best) way to schedule the cleanup is with `setInterval`:

```ts
const throttler = new Throttler([1, 2, 4, 8, 16, 30, 60, 300]);
const oneMinute = 60_000;

setInterval(() => throttler.cleanup(), oneMinute);
```
This cleanup mechanism helps ensure that your throttler doesn't hold onto old records indefinitely, keeping your application efficient. While this approach is simple and easy to implement, it may need further refinement for more complex use cases (e.g., using more advanced scheduling or handling high concurrency).
With periodic cleanup, you prevent memory bloat and ensure that users who haven’t attempted to make requests in a while are no longer tracked - this is a first step toward making your rate-limiting system both scalable and resource-efficient.
- If you're feeling adventurous, you may be interested in reading how object properties are allocated and how that changes at runtime. Also, why not, VM optimizations like inline caches, which particularly favor monomorphism. Enjoy. ↩