Lucas Godoy
Rate limiting your goroutines

Hey mates!
Just a quick one I'd like to talk about: rate limiting goroutines.

This is about controlling the number of tasks that actually execute concurrently.

Sometimes we have to process a stream of long-running tasks without knowing ahead of time how many of them will come out of the task channel. The main concern here is not to fire a goroutine for every task the moment it is ingested: spawning an uncontrolled number of goroutines concurrently can lead to unpredictable behavior or memory exhaustion.

Therefore, a limiter (AKA semaphore) is added to the mix: a buffered channel of empty structs whose capacity equals the number of tasks allowed to run concurrently.

  1. For each of the first n iterations (where n is the limit), an empty struct is pushed onto the limiter channel,
  2. and a goroutine is fired up to run the incoming task.
  3. At iteration n + 1, the limiter channel is full, so the main goroutine blocks.
  4. Once any of the currently running tasks finishes, it reads from the limiter channel to make room for another task. This unblocks the main goroutine.
  5. With control back, the main goroutine pushes an empty struct onto the limiter channel and starts the cycle over by firing a new goroutine for the incoming task.

And so on, until the timeout is reached, at which point the for loop breaks and no more tasks are run.

package main

import (
	"fmt"
	"time"
)

const limit = 2 // how many tasks are allowed to run concurrently

var timeoutAt = 75 * time.Millisecond

func main() {
	limiter := make(chan struct{}, limit)
	defer close(limiter)
	defer func() {
		fmt.Println("exiting runner")
	}()
	run(limiter)
}

func run(limiter chan struct{}) {
	id := 1
	timeout := time.After(timeoutAt) // one deadline for the whole run
	for {
		// note: no default case, so the select blocks when the limiter is full
		select {
		// fill up the limiter channel until it reaches capacity
		case limiter <- struct{}{}:
			// fire up a new goroutine for the long-running function
			go longRunningTask(limiter, id)
			id++
		// when the timeout is reached, break out of the for loop
		case <-timeout:
			fmt.Println("task runner has timed out")
			return
		}
	}
}

func longRunningTask(limiter chan struct{}, id int) {
	defer func() { <-limiter }() // free up a slot once the task has finished
	fmt.Printf("executing task with ID [%v] at [%v]\n", id, time.Now())
	time.Sleep(1 * time.Second) // simulate some long-running work
	fmt.Printf("finished task with ID [%v] at [%v]\n", id, time.Now())
}

To sum up, this is how we can keep goroutines from all firing at once: a buffered limiter channel controls how many of them can be up and running concurrently, avoiding memory exhaustion and unpredictable behavior.


Top comments (2)

Kishan B • Edited

A simpler version of the above
github.com/aviddiviner/gin-limit/b...

Kishan B • Edited

When using the dark theme, the GitHub gist embed does not adapt itself (screenshot below), which hurts the eyes.

My suggestion: use markdown code blocks with Go. Example below. This adapts well with both dark and light themes and has syntax coloring.

package main

import "fmt"

func main() {
	fmt.Println("Hello world")
}

[Screenshot]
