Hi everyone, welcome to my first blog :)
In this blog we will explore why we need to limit the number of goroutines, and the best practices for doing so.
Go's concurrency model, powered by goroutines and channels, is one of its standout features. Goroutines are lightweight threads managed by the Go runtime, which let you perform concurrent tasks efficiently.
However, spawning too many goroutines can cause problems such as memory exhaustion and degraded performance.
Why do we need to limit goroutines?
Goroutines are lightweight threads, but they still consume memory, so they are not free. Each goroutine spawned consumes:
1. A small amount of memory for its stack (a few KB to start).
2. CPU time for scheduling and execution.
3. Potentially other system resources, such as file descriptors or network sockets.
What happens if you don't limit them?
1. High memory usage.
2. CPU contention.
3. Overwhelming external systems or APIs.
To avoid these problems, we need to limit the number of goroutines in critical scenarios such as bulk processing tasks, network or file operations, and workload distribution in microservices.
Techniques to limit goroutines
1. Using a semaphore channel
The semaphore pattern is a simple way to limit goroutines: a buffered channel acts as a semaphore, capping the number of concurrent tasks.
package main

import (
    "fmt"
    "time"
)

func worker(id int, sem chan struct{}) {
    defer func() { <-sem }() // Release the semaphore slot on function exit
    fmt.Printf("Worker %d started\n", id)
    time.Sleep(2 * time.Second)
    fmt.Printf("Worker %d finished\n", id)
}

func main() {
    const maxGoroutines = 3
    sem := make(chan struct{}, maxGoroutines)
    for i := 1; i <= 10; i++ {
        sem <- struct{}{} // Acquire a slot; blocks while maxGoroutines workers are running
        go worker(i, sem)
    }
    // Wait for all workers to finish by filling every slot
    for i := 0; i < cap(sem); i++ {
        sem <- struct{}{}
    }
}
In the example above we use a buffered channel of capacity maxGoroutines as a semaphore. Before spawning each goroutine we acquire a slot by sending into the channel; because the channel can hold at most maxGoroutines values, the send blocks once that many workers are running, so there are never more goroutines than the limit. When a worker finishes, it releases its slot by receiving from the channel. Finally, main fills every slot again, which can only complete once all workers have released theirs.
2. Using a worker pool
A worker pool processes tasks with a fixed number of workers. It uses a channel to distribute the work among them.
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, tasks <-chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for task := range tasks {
        fmt.Printf("Worker %d processing task %d\n", id, task)
        time.Sleep(1 * time.Second)
    }
}

func main() {
    const numWorkers = 3
    tasks := make(chan int, 10)
    var wg sync.WaitGroup

    // Start worker goroutines
    for i := 1; i <= numWorkers; i++ {
        wg.Add(1)
        go worker(i, tasks, &wg)
    }

    // Send tasks to workers
    for i := 1; i <= 10; i++ {
        tasks <- i
    }
    close(tasks) // No more tasks; workers exit when the channel drains

    // Wait for workers to finish
    wg.Wait()
    fmt.Println("All tasks completed.")
}
In the example above we define numWorkers = 3, which is the number of goroutines we want to spawn for this task, and a buffered tasks channel holding the work to be done.
The process works like a pipeline: the three workers pull tasks from the tasks channel and execute them, and they keep doing this until the channel is closed and drained (i.e., all tasks have been completed). The workers then return, wg.Wait() unblocks, and the main goroutine prints "All tasks completed."
3. Rate limiting with time.Ticker
This technique comes in handy when we need to limit how often goroutines start for time-sensitive work such as real-time data processing, payment gateways, or gaming applications.
package main

import (
    "fmt"
    "time"
)

func worker(id int) {
    fmt.Printf("Worker %d started\n", id)
    time.Sleep(1 * time.Second)
    fmt.Printf("Worker %d finished\n", id)
}

func main() {
    const maxRate = 2 // Goroutines started per second
    ticker := time.NewTicker(time.Second / maxRate)
    defer ticker.Stop()

    for i := 1; i <= 10; i++ {
        <-ticker.C // Wait for the next tick before starting a worker
        go worker(i)
    }

    // Allow time for all goroutines to finish
    time.Sleep(10 * time.Second)
    fmt.Println("All workers completed.")
}
The example above limits how frequently goroutines (workers) are started: the ticker paces them to at most 2 per second.
Here is how it plays out:
- At 0.5 s, worker 1 starts.
- At 1 s, worker 2 starts while worker 1 is still running.
- By 5 s, all ten workers have started, while some are still finishing their tasks.
- Finally, the program waits long enough (10 s) for all workers to complete before exiting.
That's all for this blog :)
Hope you gained something from it.
See you in the next one :)