When you’re running a plain node:http server, adding rate limiting often feels heavier than it needs to be. Most solutions assume Express-style middleware or an API gateway sitting in front of your app.
This approach keeps things closer to the metal: no framework assumptions, no magic lifecycle hooks, just functions that decide whether a request is allowed to continue.
Design goals
- Framework-agnostic: works directly with http.createServer.
- Redis-backed: safe for multiple processes or servers.
The result is something you can read end-to-end without needing to know a specific ecosystem.
The base request handler
Rate limiters follow the same RequestHandler type used elsewhere in the server. A handler inspects the request, possibly writes a response, and either ends the request or lets it continue.
// request-handler.ts
import http from "node:http";
export type RequestHandler<ReturnType = void> = (
incommingMessage: http.IncomingMessage,
serverResponse: http.ServerResponse,
) => ReturnType;
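To make the shape concrete, here is a sketch of a handler built against that type (the type is inlined here so the snippet stands alone, and the `x-api-key` header is just an illustrative choice, not something the post's server requires):

```typescript
import http from "node:http";

type RequestHandler<ReturnType = void> = (
  incommingMessage: http.IncomingMessage,
  serverResponse: http.ServerResponse,
) => ReturnType;

// Hypothetical illustration: end the request with 401 when no
// "x-api-key" header is present; otherwise do nothing, which
// lets later handlers continue processing the request.
const requireApiKey: RequestHandler = (incommingMessage, serverResponse) => {
  if (!incommingMessage.headers["x-api-key"]) {
    serverResponse.statusCode = 401;
    serverResponse.setHeader("Content-Type", "text/plain");
    serverResponse.end("Unauthorized");
  }
};
```

The key convention: a handler signals "stop here" by ending the response, and "continue" by returning without writing anything.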
Rate limiter strategies
Fixed window counter
The fixed window strategy is straightforward:
- Each client gets a Redis key.
- Requests increment a counter.
- The key expires after windowSize seconds.
- Once the limit is reached, requests are rejected until the window resets.
// redis-rate-limiter.ts
import http from "node:http";
import { createClient, createClientPool } from "redis";
import type { RequestHandler } from "./request-handler";
export function fixedWindowCounter(
client: ReturnType<typeof createClient> | ReturnType<typeof createClientPool>,
config: {
limit: number;
windowSize: number;
baseKey: string;
getClientIdFromIncommingRequest: (
incommingMessage: http.IncomingMessage,
) => string;
},
): RequestHandler<Promise<void>> {
return async (incommingMessage, serverResponse) => {
const key = `${config.baseKey}:${config.getClientIdFromIncommingRequest(
incommingMessage,
)}`;
const currentValue = parseInt((await client.get(key)) ?? "", 10);
const count = Number.isNaN(currentValue) ? 0 : currentValue;
if (count >= config.limit) {
const ttl = await client.ttl(key);
const retryDate =
ttl > 0
? new Date(Date.now() + ttl * 1000).toUTCString()
: new Date(Date.now() + config.windowSize * 1000).toUTCString();
serverResponse.statusCode = 429;
serverResponse.setHeader("Retry-After", retryDate);
serverResponse.setHeader("Content-Type", "text/plain");
serverResponse.end("Too Many Requests");
return;
}
const transaction = client.multi();
transaction.incr(key);
transaction.expire(key, config.windowSize, "NX");
await transaction.exec();
};
}
When this works well
- Low to moderate traffic.
- Simple abuse prevention.
- Internal or trusted clients.
Known limitations
- Traffic can spike at window boundaries: a client can exhaust one window just before it resets, then immediately exhaust the next.
- Enforcement is coarse-grained.
- The read-then-increment sequence is not atomic, so a burst of concurrent requests can slightly overshoot the limit.
If those are acceptable, fixed windows are hard to beat for simplicity.
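The boundary spike is easiest to see with timestamps. This is a plain TypeScript sketch with no Redis involved, just the same window arithmetic: with a limit of 5 per 60-second window, a client can send 5 requests at t = 59s and 5 more at t = 61s, so 10 requests land within two seconds while each window individually stays under its limit.

```typescript
// Assign a timestamp (ms) to a fixed window of `windowSize` seconds.
function windowId(timestampMs: number, windowSize: number): number {
  return Math.floor(timestampMs / (windowSize * 1000));
}

const limit = 5;
const windowSize = 60;

// Five requests just before the boundary, five just after.
const burst = [
  ...Array.from({ length: limit }, () => 59_000),
  ...Array.from({ length: limit }, () => 61_000),
];

// Count requests per window, as the fixed-window limiter would.
const counts = new Map<number, number>();
for (const t of burst) {
  const id = windowId(t, windowSize);
  counts.set(id, (counts.get(id) ?? 0) + 1);
}

// Each window individually respects the limit, yet the client
// achieved 2× the limit in a two-second burst.
console.log([...counts.values()]); // [5, 5]
```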
Sliding window counter
Sliding windows trade simplicity for smoother enforcement. Instead of one counter, requests are grouped into smaller time buckets.
Each request:
- Increments a sub-window counter.
- Checks the sum of all sub-windows in the main window.
- Expires buckets automatically over time.
// redis-rate-limiter.ts
import http from "node:http";
import { createClient, createClientPool } from "redis";
import * as Utils from "../utils";
import type { RequestHandler } from "./request-handler";
export function slidingWindowCounter(
client: ReturnType<typeof createClient> | ReturnType<typeof createClientPool>,
config: {
limit: number;
windowSize: number;
subWindowSize: number;
baseKey: string;
getClientIdFromIncommingRequest: (
incommingMessage: http.IncomingMessage,
) => string;
},
): RequestHandler<Promise<void>> {
return async (incommingMessage, serverResponse) => {
const key = `${config.baseKey}:${config.getClientIdFromIncommingRequest(
incommingMessage,
)}`;
const values = Object.values(await client.hGetAll(key))
// Note: .map(parseInt) would pass the array index as the radix
// (parseInt(v, 1) is always NaN), so parse explicitly in base 10.
.map((value) => parseInt(value, 10))
.map((v) => (Number.isNaN(v) ? 0 : v));
const total = Utils.getNumberArraySum(values);
if (total >= config.limit) {
const retryDate = new Date(
Date.now() + config.subWindowSize * 1000,
).toUTCString();
serverResponse.statusCode = 429;
serverResponse.setHeader("Retry-After", retryDate);
serverResponse.setHeader("Content-Type", "text/plain");
serverResponse.end("Too Many Requests");
return;
}
const currentSubWindow =
Math.floor(Date.now() / (config.subWindowSize * 1000)) *
(config.subWindowSize * 1000);
const transaction = client.multi();
transaction.hIncrBy(key, currentSubWindow.toString(), 1);
transaction.hExpire(
key,
currentSubWindow.toString(),
config.windowSize,
"NX",
);
await transaction.exec();
};
}
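The `getNumberArraySum` helper comes from a shared `../utils` module that isn't shown in the post. Assuming it simply sums an array of numbers, a minimal version could be:

```typescript
// Minimal sketch of the helper assumed by slidingWindowCounter:
// sum an array of numbers, returning 0 for an empty array.
export function getNumberArraySum(values: number[]): number {
  return values.reduce((sum, value) => sum + value, 0);
}
```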
Trade-offs
Pros
- Smoother request distribution.
- No hard reset boundaries.
Cons
- More Redis operations per request.
- Slightly more implementation complexity.
Even so, the approach stays intentionally practical rather than theoretical.
Using a rate limiter in a server
Here’s a minimal example wiring a rate limiter into a Node HTTP server.
// server.ts
import http from "node:http";
import { createClient } from "redis";
import { fixedWindowCounter } from "./rate-limiters";
const redis = createClient();
await redis.connect();
const rateLimiter = fixedWindowCounter(redis, {
limit: 100,
windowSize: 60,
baseKey: "rate-limit",
getClientIdFromIncommingRequest: (req) =>
req.socket.remoteAddress ?? "unknown",
});
const server = http.createServer();
server.on("request", async (req, res) => {
await rateLimiter(req, res);
if (res.writableEnded) {
return;
}
res.statusCode = 200;
res.end("OK");
});
server.listen(3000);
If the limiter decides the request shouldn’t proceed, it ends the response and nothing else runs.
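Because rate limiters are plain handlers, they compose. A small helper (not from the post, just a sketch built on the same `RequestHandler` shape, inlined here so the snippet stands alone) can run several handlers in order and stop as soon as one of them ends the response:

```typescript
import http from "node:http";

type RequestHandler<ReturnType = void> = (
  incommingMessage: http.IncomingMessage,
  serverResponse: http.ServerResponse,
) => ReturnType;

// Run handlers left to right; stop once one ends the response.
export function composeHandlers(
  ...handlers: RequestHandler<void | Promise<void>>[]
): RequestHandler<Promise<void>> {
  return async (incommingMessage, serverResponse) => {
    for (const handler of handlers) {
      await handler(incommingMessage, serverResponse);
      if (serverResponse.writableEnded) {
        return;
      }
    }
  };
}
```

With this helper, the server wiring flattens to something like `server.on("request", composeHandlers(rateLimiter, mainHandler))`, and the early-return check moves into one place.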
Closing thoughts
This setup isn’t meant to replace dedicated gateways or edge rate limiting. It’s a pragmatic, educational solution.
Everything happens in plain Node, Redis does the counting, and the behavior is easy to reason about by just reading the code.
If that’s the kind of setup you prefer, this approach fits nicely without dragging in a framework.