DEV Community

Vishw Patel

How to Implement Rate Limiting in .NET

In this article, we cover what rate limiting is, how to implement it in .NET with a simple approach, the problems with that approach, and how to move towards a production-ready solution.

We are going to address the following topics:

  1. Introduction to Rate Limiting
  2. Project Setup
  3. Register Redis Connection
  4. Create Middleware for Rate limiting
  5. Register middleware
  6. Run the Application and Check

Rate Limiting flowchart

Introduction to Rate Limiting

Rate limiting is a technique that restricts the number of incoming requests a server accepts within a given time frame. Its purpose is to guarantee the availability, stability, and security of an API. By capping the rate of requests, API providers can prevent misuse, lower the chance of server overload, and ensure consistent performance for all users. These are a few important rate-limiting concepts:

  1. Thresholds: Rate limits are generally expressed as the maximum number of requests permitted in a given amount of time,
    e.g., "100 requests per minute" or "10,000 requests per day."

  2. Client Identification: Different identities, including IP addresses, API keys, user accounts, and other tokens, can be used to apply rate limits.

  3. Types of Rate Limiting:

  • User-based rate limiting: This technique applies limits to individual users or accounts so that no single user can send more requests than permitted.
  • IP-based rate limiting: By applying limits according to the IP address of the client, it is possible to keep a single IP from overloading the server.
  • Application-based rate limiting: This feature allows you to differentiate between various applications that use the API by enforcing limits depending on the API key.

When a client exceeds the allowed rate, the server typically responds with an HTTP status code, such as 429 Too Many Requests.

The response often includes information about the limit, the time until the limit resets, and guidance on how to retry the request.
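Inside an ASP.NET Core middleware, rejecting a request with that information might look as follows. The Retry-After header is standard HTTP; the X-RateLimit-* header names and the numbers used here are a common convention chosen for illustration, not an official standard:

```csharp
// Reject the request and tell the client when it may retry.
// Retry-After is standard HTTP; the X-RateLimit-* names are a
// widely used convention, not part of any specification.
context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
context.Response.Headers["Retry-After"] = "60";            // seconds until the window resets
context.Response.Headers["X-RateLimit-Limit"] = "100";     // max requests per window
context.Response.Headers["X-RateLimit-Remaining"] = "0";   // requests left in this window
await context.Response.WriteAsync("Rate limit exceeded. Try again later.");
```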

Project Setup

Create a new ASP.NET Core 7 MVC or Web API project using Visual Studio, VS Code, or any text editor:

dotnet new mvc -n RateLimitingDemo
cd RateLimitingDemo

After creating the project, the required packages must be installed, so add the following NuGet packages:

  • Microsoft.Extensions.Caching.StackExchangeRedis
  • StackExchange.Redis
dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis
dotnet add package StackExchange.Redis

Register Redis Connection

Before going any further, make sure your Redis server is running.

To start the Redis server, simply run the redis-server command in your terminal. This will start the server with default configurations:

redis-server

You should see the server starting up and a log of activities indicating that it is ready to accept connections.

Open another terminal window to interact with the Redis server using the Redis Command Line Interface (CLI):

redis-cli

Now check whether the server is responding with the PING command:

PING

PING returns PONG. This command is useful for:

  1. Testing whether a connection is still alive.
  2. Verifying the server's ability to serve data - an error is returned when this isn't the case (e.g., during load from persistence or accessing a stale replica).
  3. Measuring latency.

If Redis is configured correctly and running fine, we can jump to the next steps.

Add the Redis connection string to the appsettings.json file:

{
  "Redis": {
    "ConnectionString": "localhost:6379"
  }
}

Now configure Redis in Program.cs:

//...

builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetSection("Redis")["ConnectionString"];
});

//...

Create Middleware for Rate limiting

Right-click on your project RateLimitingDemo, click Add, and then Add Folder. Rename the newly created folder from NewFolder to Middlewares.

After renaming, right-click on the Middlewares folder, select Add New Item, and name it RateLimitingMiddleware.

Every middleware in .NET has a RequestDelegate as a member and has to implement an InvokeAsync method.

public class RateLimitingMiddleware
{
    private readonly RequestDelegate _next;

    public RateLimitingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // do something
        await _next(context);
    }
}

Don't forget to call the _next() delegate. If _next() is not called, the request does not propagate to the respective controller, and the request hangs.

For this, our middleware will:

  1. Get the IP address of the request.
  2. Get the request count for this IP from Redis.
  3. If the request count has reached the max limit, respond with 429 Too Many Requests.
  4. Remove requests that are older than our window size, i.e., the time limit.
  5. Add the new request to Redis.

using Microsoft.Extensions.Caching.Distributed;
using System.Text.Json;

public class RateLimitingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IDistributedCache _cache;
    private readonly int _maxRequests;
    private readonly TimeSpan _windowSize;

    public RateLimitingMiddleware(RequestDelegate next, IDistributedCache cache)
    {
        _next = next;
        _cache = cache;
        _maxRequests = 10; // Set your max requests
        _windowSize = TimeSpan.FromMinutes(1); // Set your sliding window size
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var key = $"ratelimit:{context.Connection.RemoteIpAddress}";

        // We will implement this method below; for now, assume it returns the request count.
        var currentRequestCount = await GetRequestCountAsync(key);

        if (currentRequestCount >= _maxRequests)
        {
            context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
            await context.Response.WriteAsync("Rate limit exceeded. Try again later.");
            return;
        }

        // We will implement this method below; for now, assume it records the request.
        await IncrementRequestCountAsync(key);
        await _next(context);
    }
}

The method GetRequestCountAsync takes a string key as a parameter. It checks Redis for the requests logged under this key, drops entries older than the window, and returns the count of those remaining; if there is no entry, it returns 0.

private async Task<int> GetRequestCountAsync(string key)
{
    var cacheValue = await _cache.GetStringAsync(key);
    if (cacheValue == null)
    {
        return 0;
    }

    var requestLog = JsonSerializer.Deserialize<List<long>>(cacheValue);
    var currentTime = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
    requestLog = requestLog.Where(timestamp => timestamp > currentTime - _windowSize.TotalSeconds).ToList();

    await _cache.SetStringAsync(key, JsonSerializer.Serialize(requestLog), new DistributedCacheEntryOptions
    {
        SlidingExpiration = _windowSize
    });

    return requestLog.Count;
}

The method IncrementRequestCountAsync takes a string key as a parameter. It reads the existing request list for this key (creating a new list if none exists), appends the current timestamp, and writes the list back to Redis.

private async Task IncrementRequestCountAsync(string key)
{
    var cacheValue = await _cache.GetStringAsync(key);
    var requestLog = cacheValue == null ? new List<long>() : JsonSerializer.Deserialize<List<long>>(cacheValue);

    requestLog.Add(DateTimeOffset.UtcNow.ToUnixTimeSeconds());

    await _cache.SetStringAsync(key, JsonSerializer.Serialize(requestLog), new DistributedCacheEntryOptions
    {
        SlidingExpiration = _windowSize
    });
}

Our RateLimitingMiddleware with all methods implemented looks like this:

using Microsoft.Extensions.Caching.Distributed;
using System.Text.Json;

public class RateLimitingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IDistributedCache _cache;
    private readonly int _maxRequests;
    private readonly TimeSpan _windowSize;

    public RateLimitingMiddleware(RequestDelegate next, IDistributedCache cache)
    {
        _next = next;
        _cache = cache;
        _maxRequests = 10; // Set your max requests
        _windowSize = TimeSpan.FromMinutes(1); // Set your sliding window size
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var key = $"ratelimit:{context.Connection.RemoteIpAddress}";

        var currentRequestCount = await GetRequestCountAsync(key);

        if (currentRequestCount >= _maxRequests)
        {
            context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
            await context.Response.WriteAsync("Rate limit exceeded. Try again later.");
            return;
        }

        await IncrementRequestCountAsync(key);
        await _next(context);
    }

    private async Task<int> GetRequestCountAsync(string key)
    {
        var cacheValue = await _cache.GetStringAsync(key);
        if (cacheValue == null)
        {
            return 0;
        }

        var requestLog = JsonSerializer.Deserialize<List<long>>(cacheValue);
        var currentTime = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
        requestLog = requestLog.Where(timestamp => timestamp > currentTime - _windowSize.TotalSeconds).ToList();

        await _cache.SetStringAsync(key, JsonSerializer.Serialize(requestLog), new DistributedCacheEntryOptions
        {
            SlidingExpiration = _windowSize
        });

        return requestLog.Count;
    }

    private async Task IncrementRequestCountAsync(string key)
    {
        var cacheValue = await _cache.GetStringAsync(key);
        var requestLog = cacheValue == null ? new List<long>() : JsonSerializer.Deserialize<List<long>>(cacheValue);

        requestLog.Add(DateTimeOffset.UtcNow.ToUnixTimeSeconds());

        await _cache.SetStringAsync(key, JsonSerializer.Serialize(requestLog), new DistributedCacheEntryOptions
        {
            SlidingExpiration = _windowSize
        });
    }
}

Register middleware

Now it is time to register this middleware in Program.cs.
Add app.UseMiddleware&lt;RateLimitingMiddleware&gt;(); to Program.cs:

//...

app.UseMiddleware<RateLimitingMiddleware>();

//...

Run the Application and Check

For testing, set _maxRequests in the middleware to a lower value like 3 or 5.

Now, run the application:

dotnet run

You can use tools like Postman or cURL to send multiple requests and verify that after the limit is reached, the server responds with a 429 status code.
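For example, with cURL from a shell. The URL and port here are assumptions; use whatever address your app prints on startup, and lower _maxRequests as suggested above:

```shell
# Send 6 quick requests and print only the status codes.
# Once the limit is hit, the trailing requests should come back as 429.
for i in $(seq 1 6); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000/
done
```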

This setup ensures that each IP address is limited to a certain number of requests within a specified sliding window timeframe using Redis for distributed caching. The middleware tracks request timestamps and enforces the rate limit by checking and updating these timestamps in Redis.
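One caveat: the read-modify-write cycle above (GetStringAsync, prune, SetStringAsync) is not atomic, so two concurrent requests can both read the same log and each slip under the limit. As a sketch of a more production-oriented direction using the StackExchange.Redis package we installed earlier, a sorted set scored by timestamp lets Redis prune and count server-side; the class and member names here are illustrative, not a drop-in replacement:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class SortedSetRateLimiter
{
    private readonly IDatabase _db;
    private readonly int _maxRequests = 10;
    private readonly TimeSpan _windowSize = TimeSpan.FromMinutes(1);

    public SortedSetRateLimiter(IConnectionMultiplexer redis)
    {
        _db = redis.GetDatabase();
    }

    public async Task<bool> IsAllowedAsync(string key)
    {
        var now = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        var windowStart = now - (long)_windowSize.TotalMilliseconds;

        // Let Redis drop entries that fell out of the window, then count the rest.
        await _db.SortedSetRemoveRangeByScoreAsync(key, 0, windowStart);
        var count = await _db.SortedSetLengthAsync(key);
        if (count >= _maxRequests)
            return false;

        // Record this request (unique member, timestamp as score) and refresh expiry.
        await _db.SortedSetAddAsync(key, $"{now}-{Guid.NewGuid():N}", now);
        await _db.KeyExpireAsync(key, _windowSize);
        return true;
    }
}
```

Even this narrows rather than eliminates the race between counting and adding; wrapping these steps in a Lua script executed via ScriptEvaluateAsync would make the check fully atomic.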

Conclusion

In conclusion, rate limiting is essential for controlling API traffic, ensuring equitable use, and guarding against system overload and abuse. It is responsible for keeping the API service's performance and dependability high.
