Sandeep Bansod

Posted on • Originally published at stackdevlife.com

Rate Limiting Isn't Optional: Here's How to Actually Implement It in Node.js

If your API has no rate limiting, any client can send as many requests as it wants. A broken retry loop, a scraper, or a user who refreshes too fast: all of it hits your server with no limit.

This guide shows you how to add rate limiting to a Node.js API properly, from the basic setup to Redis-backed distributed limiting that works in production.


Why You Need Rate Limiting

Without rate limiting, your API is fully exposed to:

  • Retry loops that go infinite - a client bug keeps sending requests non-stop
  • Credential stuffing - bots trying thousands of username/password combinations
  • Web scrapers - pulling all your data in minutes
  • One user burning your third-party API quota - costing you money
  • Heavy users slowing things down for everyone else

Rate limiting puts a ceiling on how many requests a client can make in a given time window. Once they hit the limit, they get a 429 Too Many Requests response. Simple.

The Wrong Way: In-Memory Counters

The first thing most people try looks like this:

const requestCounts = {};

app.use((req, res, next) => {
  const ip = req.ip;
  requestCounts[ip] = (requestCounts[ip] || 0) + 1;

  if (requestCounts[ip] > 100) {
    return res.status(429).json({ error: 'Too many requests' });
  }

  next();
});

This works on one server. But the moment you have two instances running behind a load balancer, each instance has its own counter. A client that's blocked on instance A just keeps hitting instance B. Your limit is effectively multiplied by the number of servers.

There's also no time window at all: the counters only go back to zero when the server restarts, so every client is eventually blocked for good.

Use in-memory counters for local development only. For production, you need a shared store; more on that below.


Starting With express-rate-limit

express-rate-limit is the standard package for rate limiting in Express apps.

npm install express-rate-limit

Basic setup:

import rateLimit from 'express-rate-limit';

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 100,                  // max requests per window, per IP
  standardHeaders: true,     // sends RateLimit-* headers to the client
  legacyHeaders: false,
  message: {
    error: 'Too many requests. Please try again later.',
  },
});

app.use('/api', limiter);

This is a solid start. But the default store is still in-memory, and there are two common mistakes that silently break it in production.


Fix 1: Set trust proxy

If your app runs behind nginx, a cloud load balancer, or Cloudflare, then req.ip will show the proxy's internal IP address, not the actual client IP.

That means every request looks like it's coming from the same address. Your rate limiter treats all users as one person.

Fix it with one line in Express:

app.set('trust proxy', 1); // trust the first proxy in the chain

Check that it's working:

app.get('/debug/ip', (req, res) => {
  res.json({ ip: req.ip });
});

If you see 127.0.0.1 on your production server, the setting isn't working yet.


Fix 2: Use user ID on authenticated routes

Limiting by IP address causes problems when many users share the same IP, such as a team working from one office network.

For routes where users are logged in, use their user ID instead:

const userLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 60,
  keyGenerator: (req) => {
    return req.user?.id ?? req.ip; // use user ID if available, fall back to IP
  },
});

app.use('/api/dashboard', authenticate, userLimiter);

This way, one user's heavy usage doesn't block everyone else on their network.


The Three Rate Limiting Algorithms

Before adding Redis, it helps to understand the three main approaches. They all do the same thing but behave differently at the edges.

Fixed Window

Time is split into fixed chunks — say, every 60 seconds. Each client gets 100 requests per chunk.

The problem: a client can use 100 requests at second 59 and another 100 at second 61. That's 200 requests in two seconds, double the limit, because the window reset right in between.
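A minimal in-process sketch makes the boundary problem concrete. This is illustration only (the class name and shape are mine, not from any library): counts reset at each window boundary, which is exactly what allows the burst.

```javascript
// Fixed window: one counter per key, reset at every window boundary.
class FixedWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.counts = new Map(); // key -> { windowStart, count }
  }

  allow(key, now = Date.now()) {
    // All timestamps in the same chunk share one windowStart
    const windowStart = Math.floor(now / this.windowMs) * this.windowMs;
    const entry = this.counts.get(key);
    if (!entry || entry.windowStart !== windowStart) {
      // First request in a fresh window: counter starts over
      this.counts.set(key, { windowStart, count: 1 });
      return true;
    }
    if (entry.count >= this.limit) return false;
    entry.count += 1;
    return true;
  }
}
```

With a limit of 2 per second, requests at 900ms and 950ms fill the window, 990ms is rejected, but 1001ms sails through because a new window just started.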

Sliding Window

Instead of resetting at fixed intervals, the window moves with each request. The check is always "how many requests in the last 60 seconds?"

This avoids the burst problem. There's no boundary to exploit. It's more accurate, but requires tracking timestamps for each request, not just a count.
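Here's what tracking those timestamps looks like in a minimal sketch (class name and shape are mine, for illustration; production code should keep this state in a shared store like Redis):

```javascript
// Sliding window log: remember a timestamp per request, count only
// the ones inside the moving window.
class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.log = new Map(); // key -> array of request timestamps
  }

  allow(key, now = Date.now()) {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have aged out of the window
    const recent = (this.log.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.log.set(key, recent);
      return false;
    }
    recent.push(now);
    this.log.set(key, recent);
    return true;
  }
}
```

The cost is visible in the code: one stored timestamp per request instead of a single counter, which is why sliding window uses more memory than fixed window.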

Token Bucket

Each client has a bucket that holds tokens. Each request uses one token. Tokens refill at a steady rate (for example, 2 per second).

If a client hasn't made requests in a while, their tokens build up. This allows short bursts — a user who's been idle can fire off a few quick requests — while still keeping the long-term rate under control.
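The refill-and-spend logic fits in a few lines. A minimal sketch (again, names and shape are mine): capacity caps the burst size, refillPerSec sets the sustained rate.

```javascript
// Token bucket: tokens refill continuously, each request spends one.
class TokenBucket {
  constructor(capacity, refillPerSec, now = Date.now()) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.tokens = capacity; // start full so an idle client can burst
    this.lastRefill = now;
  }

  allow(now = Date.now()) {
    // Credit tokens for the time elapsed since the last check,
    // never exceeding capacity
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Note the two knobs are independent: a bucket of capacity 20 refilling at 2/sec allows a 20-request burst but only 2 requests per second sustained.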

Most production APIs use token bucket or sliding window. Fixed window is simpler to implement but easier to game.


Switching to Redis (Production Setup)

For a multi-server setup, you need a central store that all instances can share. Redis is the standard choice.

npm install rate-limiter-flexible ioredis

rate-limiter-flexible gives you full control over the algorithm and works with Redis out of the box.

Here's a rate limiter backed by Redis:

import { RateLimiterRedis } from 'rate-limiter-flexible';
import Redis from 'ioredis';

const redisClient = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT),
  enableOfflineQueue: false,
});

const rateLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'rl_api',
  points: 60,        // max requests
  duration: 60,      // per 60 seconds
  blockDuration: 60, // block the client for 60s after limit is hit
});

export async function rateLimitMiddleware(req, res, next) {
  const key = req.user?.id ?? req.ip;

  try {
    const result = await rateLimiter.consume(key);

    // Tell the client where they stand
    res.setHeader('X-RateLimit-Limit', 60);
    res.setHeader('X-RateLimit-Remaining', result.remainingPoints);
    res.setHeader('X-RateLimit-Reset', new Date(Date.now() + result.msBeforeNext).toISOString());

    next();
  } catch (rejRes) {
    if (rejRes instanceof Error) {
      // Redis is unreachable — let the request through rather than block everyone
      console.error('Rate limiter error:', rejRes.message);
      return next();
    }

    // Client hit the limit
    res.setHeader('Retry-After', Math.ceil(rejRes.msBeforeNext / 1000));
    res.status(429).json({
      error: 'Too many requests',
      retryAfter: Math.ceil(rejRes.msBeforeNext / 1000),
    });
  }
}

One decision you need to make: what happens when Redis is down? In the example above, the request is let through (fail open). That's fine for most APIs. For login or payment endpoints, you might prefer to block all traffic (fail closed) until Redis comes back.
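That decision can be isolated into one small helper so each route chooses its own failure mode. A sketch under my own naming (checkLimit is not a library function; it works with anything exposing a consume(key) method in the rate-limiter-flexible style, where a limit rejection carries msBeforeNext and a store failure is a real Error):

```javascript
// Decide whether to allow a request, with an explicit fail-open/fail-closed choice.
async function checkLimit(limiter, key, { failOpen = true } = {}) {
  try {
    const result = await limiter.consume(key);
    return { allowed: true, remaining: result.remainingPoints };
  } catch (rej) {
    if (rej instanceof Error) {
      // Store unreachable (e.g. Redis down): fail open for general APIs,
      // fail closed for login/payment endpoints
      return { allowed: failOpen, remaining: 0 };
    }
    // Genuine limit hit: rejection carries msBeforeNext
    return { allowed: false, retryAfter: Math.ceil(rej.msBeforeNext / 1000) };
  }
}
```

A login route would then call checkLimit(authLimiter, key, { failOpen: false }) while the general API keeps the default.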


Set Different Limits for Different Routes

Not every route deserves the same limit. A search endpoint that runs an expensive database query should be tighter than a simple status check.

Here's a practical three-layer setup:

// Global: catches runaway clients before they reach any route
const globalLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'rl_global',
  points: 300,
  duration: 60,
});

// Per route: tighter limits on heavy endpoints
const searchLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'rl_search',
  points: 10,
  duration: 60,
});

// Auth: very tight — prevents brute force login attacks
const authLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'rl_auth',
  points: 5,
  duration: 300,       // 5 attempts per 5 minutes
  blockDuration: 900,  // blocked for 15 minutes after that
});

// Apply them: register the global limiter first so it actually runs
// before the route-specific handlers
app.use('/api', makeMiddleware(globalLimiter));
app.post('/api/auth/login', makeMiddleware(authLimiter), loginHandler);
app.get('/api/search', makeMiddleware(searchLimiter), searchHandler);

The auth limiter matters the most. Five login attempts per five minutes stops credential stuffing without locking out someone who mistyped their password once.
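The snippets above use a makeMiddleware helper without defining it. A minimal sketch of what it could look like (the helper is my own, not part of rate-limiter-flexible; it wraps any limiter exposing consume(key) into Express-style middleware, failing open on store errors as in the earlier example):

```javascript
// Turn a rate limiter into Express-style middleware.
function makeMiddleware(limiter) {
  return async function (req, res, next) {
    const key = req.user?.id ?? req.ip; // user ID if logged in, else IP
    try {
      await limiter.consume(key);
      next();
    } catch (rej) {
      if (rej instanceof Error) {
        // Store unreachable: fail open rather than block everyone
        return next();
      }
      const retryAfter = Math.ceil(rej.msBeforeNext / 1000);
      res.setHeader('Retry-After', retryAfter);
      res.status(429).json({ error: 'Too many requests', retryAfter });
    }
  };
}
```

Because each limiter carries its own keyPrefix, the same helper can wrap the global, search, and auth limiters without their counters colliding in Redis.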


What to Send in the 429 Response

A 429 with no explanation leaves developers guessing. Give them what they need to handle it:

res.status(429).json({
  error: 'rate_limit_exceeded',
  message: 'You have sent too many requests. Please wait before trying again.',
  limit: 60,
  remaining: 0,
  resetAt: new Date(Date.now() + msBeforeNext).toISOString(),
  retryAfter: Math.ceil(msBeforeNext / 1000), // seconds to wait
});

Also set the response headers:

HTTP/1.1 429 Too Many Requests
Retry-After: 47
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 2026-05-01T04:23:00.000Z

Any client that reads Retry-After will wait the correct amount of time before retrying. That one header stops most retry hammering on its own.
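One wrinkle for clients: per the HTTP spec, Retry-After can be either a number of seconds or an HTTP-date. A small parsing helper (name and shape are mine, for illustration) handles both:

```javascript
// Convert a Retry-After header value into a delay in milliseconds.
// Returns null if the header is missing or unparseable.
function retryDelayMs(retryAfter) {
  if (retryAfter == null) return null;
  const secs = Number(retryAfter);
  if (!Number.isNaN(secs)) return secs * 1000; // delta-seconds form, e.g. "47"
  const date = Date.parse(retryAfter);         // HTTP-date form
  return Number.isNaN(date) ? null : Math.max(0, date - Date.now());
}
```

A retrying client would sleep for retryDelayMs(response.headers['retry-after']) before trying again, falling back to exponential backoff when the helper returns null.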


Testing That It Actually Works

Don't ship rate limiting without testing it. Here's a quick test with Supertest:

// test/rate-limit.test.js
import request from 'supertest';
import app from '../src/app.js';

describe('Rate limiting', () => {
  it('allows requests within the limit', async () => {
    for (let i = 0; i < 10; i++) {
      const res = await request(app).get('/api/search?q=test');
      expect(res.status).not.toBe(429);
    }
  });

  it('blocks requests that go over the limit', async () => {
    const requests = Array.from({ length: 15 }, () =>
      request(app).get('/api/search?q=test')
    );

    const responses = await Promise.all(requests);
    const blocked = responses.filter((r) => r.status === 429);

    expect(blocked.length).toBeGreaterThan(0);
  });

  it('returns a Retry-After header when blocked', async () => {
    const requests = Array.from({ length: 15 }, () =>
      request(app).get('/api/search?q=test')
    );

    const responses = await Promise.all(requests);
    const blocked = responses.find((r) => r.status === 429);

    expect(blocked?.headers['retry-after']).toBeDefined();
  });
});

For load testing, use autocannon:

npx autocannon -c 50 -d 10 http://localhost:3000/api/search

Run it and check how many 429 responses come back. If you see zero, either your limit is set too high or the limiter isn't actually wired in.


Quick Reference

| What | Use |
| --- | --- |
| Local development | express-rate-limit with default memory store |
| Production (any multi-server setup) | rate-limiter-flexible + Redis |
| Auth endpoints | 5 attempts / 5 min, 15-min block |
| Search / heavy endpoints | 10 requests / min |
| General API | 60–100 requests / min |
| Key for anonymous users | IP address |
| Key for logged-in users | User ID |

The Short Version

  • In-memory rate limiting breaks the moment you have more than one server
  • Set trust proxy correctly; otherwise you're limiting the wrong IP
  • Use user ID as the rate limit key for authenticated routes
  • For production, use Redis as the shared store
  • Apply different limits to different routes: auth tighter, general looser
  • Always send Retry-After in your 429 response
  • Test it under load before you deploy
