Neel Patel

Adding API Rate Limiting to Your Go API

Alright, folks, we’ve covered a lot so far: JWT authentication, database connections, logging, and error handling. But what happens when your API starts getting slammed with requests? Without control, high traffic can lead to slow response times or even downtime. 😱

We’re going to solve that by implementing rate limiting to control the flow of traffic. We’ll be using the simple and effective golang.org/x/time/rate package. Later, when my own ThrottleX solution is ready, I’ll show you how to integrate that as a more scalable option. (Psst, check out my GitHub at github.com/neelp03/throttlex for updates! Feel free to comment any issues you see in there o7)

Why Rate Limiting? 🚦

Rate limiting is like a bouncer for your API—it controls the number of requests users can make within a given timeframe. This prevents your API from getting overwhelmed, ensuring smooth and fair access for all users. Rate limiting is essential for:

  • Preventing Abuse: Stops bad actors or overly enthusiastic users from overwhelming your API.
  • Stability: Keeps your API responsive and reliable, even during traffic spikes.
  • Fairness: Allows resources to be shared equally among users.

Step 1: Installing the x/time/rate Package

The golang.org/x/time/rate package is part of the extended Go libraries and provides a straightforward token bucket rate limiter. To get started, install it:

go get golang.org/x/time/rate

Step 2: Setting Up the Rate Limiter

Let’s create a rate-limiting middleware that controls the number of requests a client can make. In this example, we’ll limit clients to 5 requests per minute.

package main

import (
    "net/http"
    "sync"
    "time"

    "golang.org/x/time/rate"
)

// Create a struct to hold each client's rate limiter
type Client struct {
    limiter *rate.Limiter
}

// In-memory storage for clients
var clients = make(map[string]*Client)
var mu sync.Mutex

// Get a client's rate limiter or create one if it doesn't exist
func getClientLimiter(ip string) *rate.Limiter {
    mu.Lock()
    defer mu.Unlock()

    // If the client already exists, return the existing limiter
    if client, exists := clients[ip]; exists {
        return client.limiter
    }

    // 5 requests per minute: refill one token every 12 seconds,
    // with a burst of 5 so a client can spend its full allowance at once
    limiter := rate.NewLimiter(rate.Every(12*time.Second), 5)
    clients[ip] = &Client{limiter: limiter}
    return limiter
}

Step 3: Creating the Rate Limiting Middleware

Now, let’s use the getClientLimiter function in a middleware that will restrict access based on the rate limit.

func rateLimitingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Note: r.RemoteAddr is "IP:port"; in production, strip the
        // port (e.g. with net.SplitHostPort) so all of a client's
        // connections share one limiter
        ip := r.RemoteAddr
        limiter := getClientLimiter(ip)

        // Check if the request is allowed
        if !limiter.Allow() {
            http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
            return
        }

        next.ServeHTTP(w, r)
    })
}

How It Works:

  1. IP-Based Limiting: Each client is identified by their IP address. We check the client’s IP and assign a rate limiter to it.
  2. Request Check: The limiter.Allow() method checks if the client is within the rate limit. If they are, the request proceeds to the next handler; if not, we respond with 429 Too Many Requests.

Step 4: Applying the Middleware Globally 🔗

Now let’s hook up our rate limiter to the API so every request has to pass through it:

func main() {
    db = connectDB()
    defer db.Close()

    r := mux.NewRouter()

    // Apply rate-limiting middleware globally
    r.Use(rateLimitingMiddleware)

    // Other middlewares
    r.Use(loggingMiddleware)
    r.Use(errorHandlingMiddleware)

    r.HandleFunc("/login", login).Methods("POST")
    r.Handle("/books", authenticate(http.HandlerFunc(getBooks))).Methods("GET")
    r.Handle("/books", authenticate(http.HandlerFunc(createBook))).Methods("POST")

    fmt.Println("Server started on port :8000")
    log.Fatal(http.ListenAndServe(":8000", r))
}

By applying r.Use(rateLimitingMiddleware), we ensure that every incoming request is checked by the rate limiter before it reaches any endpoint.


Step 5: Testing the Rate Limiting 🧪

Start your server:

go run main.go

Now, let’s hit the API with some requests. You can use a loop with curl to simulate multiple requests in a row:

for i in {1..10}; do curl http://localhost:8000/books; done

Since we set the limit to 5 requests per minute, you should see 429 Too Many Requests responses once you exceed the allowed rate.


What’s Next?

And there you have it—rate limiting with golang.org/x/time/rate to keep your API stable and responsive under pressure. Rate limiting is a crucial tool for any scalable API, and we’re just scratching the surface here.

Once ThrottleX is production-ready, I’ll be posting a follow-up tutorial to show you how to integrate it into your Go API for even more flexibility and distributed rate limiting. Keep an eye on my ThrottleX GitHub repo for updates!

Next time, we’re going to containerize our API with Docker, so it’s ready to run anywhere. Stay tuned, and happy coding! 🐳🚀
