DEV Community

Discussion on: Implement rate limit in Golang

David Kröll

Hi, thanks for sharing! I noticed some issues with your code and would like to suggest a solution here. Regarding the design of this rate limiter, I'd say it's a 1-request-per-100ms limiter rather than a 10 req/sec one, as you can see in a slightly modified version here:

I use time.Since() to measure when each request is handled. You can see in the output that a request is handled every 100ms. If that is the desired behaviour, it's fine. On the other hand, this solution cannot cope with peaks in the requests. For example, making 5 requests concurrently still takes 500ms, even though 10 per second should be allowed.

In addition, the ticker behind the rate variable is never cleaned up, since time.Tick provides no way to stop it. That's not a problem here (because it's a global variable and lives as long as the program does), but I just wanted to mention it.

package main

import (
    "fmt"
    "sync"
    "time"
)

var limit = 10

var rate = time.Tick(time.Second / time.Duration(limit))
var startTime = time.Now()

func main() {
    totalRequests := 100
    var wg sync.WaitGroup
    wg.Add(totalRequests)

    // Rate limited: one request per tick, i.e. every 100ms
    for i := 0; i < totalRequests; i++ {
        go func(i int) {
            defer wg.Done()
            sendRequest(i + 1)
        }(i)
    }

    // Wait until all requests are completed
    wg.Wait()
}

func sendRequest(i int) {
    <-rate // Wait for the next tick
    fmt.Printf("Completed request %3d at %s\n", i, time.Since(startTime))
}

I did a slight refactoring using the token-bucket rate limiting approach with channels. The bucket is a buffered channel with 10 entries, and every request receives a token from it. In another goroutine, the bucket is refilled with 10 items every second. If there is no token left, the request blocks until the bucket is filled up again. This solution handles peaks very well, as you can see in the output: a batch of 10 requests is handled at the same time, and then everything waits for a second.

package main

import (
    "fmt"
    "sync"
    "time"
)

var limit = 10

var bucket = make(chan struct{}, limit)
var startTime = time.Now()

func main() {
    totalRequests := 100
    var wg sync.WaitGroup
    wg.Add(totalRequests)

    go func() {
        for {
            // the bucket refill routine
            for i := 0; i < limit; i++ {
                bucket <- struct{}{}
            }
            time.Sleep(time.Second)
        }
    }()

    // Rate limit to 10 requests per second
    for i := 0; i < totalRequests; i++ {
        go func(i int) {
            defer wg.Done()
            sendRequest(i + 1)
        }(i)
    }

    // Wait until all requests are completed
    wg.Wait()
}

func sendRequest(i int) {
    <-bucket // get "token" from the bucket
    fmt.Printf("Completed request %3d at %s\n", i, time.Since(startTime))
}
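The same channel-as-bucket idea also works when you want to reject rather than queue excess requests. Below is a hypothetical non-blocking variant (the tokenBucket type and its Allow/Refill methods are my own names, not from the code above) where Allow reports whether a token was available instead of blocking:

```go
package main

import "fmt"

// tokenBucket wraps the buffered channel from the example above.
type tokenBucket struct {
	tokens chan struct{}
}

func newTokenBucket(limit int) *tokenBucket {
	tb := &tokenBucket{tokens: make(chan struct{}, limit)}
	tb.Refill() // start with a full bucket
	return tb
}

// Refill tops the bucket up to capacity; call it once per interval
// (e.g. from a goroutine that sleeps a second between calls).
func (tb *tokenBucket) Refill() {
	for {
		select {
		case tb.tokens <- struct{}{}:
		default:
			return // bucket is full
		}
	}
}

// Allow consumes a token if one is available and reports success;
// it never blocks.
func (tb *tokenBucket) Allow() bool {
	select {
	case <-tb.tokens:
		return true
	default:
		return false
	}
}

func main() {
	tb := newTokenBucket(3)
	for i := 1; i <= 5; i++ {
		// requests 1-3 print true, 4 and 5 print false
		fmt.Printf("request %d allowed: %v\n", i, tb.Allow())
	}
}
```

For production code, golang.org/x/time/rate implements this token-bucket pattern with both blocking (Wait) and non-blocking (Allow) APIs.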
Jack

@davidkroell thank you for your incredibly useful reply.
You're right. My solution may be suitable in some cases, but if the goal is to perform many requests concurrently while maintaining a specific rate limit, then your solution is excellent.