Problems this pattern can solve:
- An external API allows only 5 concurrent requests; any more and it bans your IP.
- You have a database connection pool of 10. If you spawn 100 goroutines, each tries to acquire a connection, and 90 of them sit blocked, consuming memory.
- A microservice collapses under load. You need to cap the number of concurrent requests to it and, when the cap is reached, return an error quickly instead of piling more load onto the service.
Essence: A synchronization mechanism that uses a counter to limit the number of concurrently executing operations or access to a resource. Goroutines "acquire" the semaphore before starting work and "release" it upon completion, while the semaphore blocks new acquisitions when the counter reaches zero.
Key idea: A permission counter that blocks execution when the limit is exhausted and allows it when there are free slots.
Use the official golang.org/x/sync/semaphore package unless you need to combine the semaphore with other channel-based patterns or require custom behavior.
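A minimal sketch of that package in use (the limit of 3, the task count, and the sleep are illustrative; Acquire takes a context, so waits can be cancelled):
package main
import (
"context"
"fmt"
"time"
"golang.org/x/sync/semaphore"
)
func main() {
ctx := context.Background()
sem := semaphore.NewWeighted(3) // at most 3 tasks run concurrently
for i := 1; i <= 10; i++ {
// Acquire blocks until a permit is free or ctx is cancelled
if err := sem.Acquire(ctx, 1); err != nil {
fmt.Println("acquire failed:", err)
break
}
go func(id int) {
defer sem.Release(1)
fmt.Println("task", id, "running")
time.Sleep(500 * time.Millisecond) // simulate work
}(i)
}
// Acquiring the full weight waits for all outstanding tasks to finish
if err := sem.Acquire(ctx, 3); err == nil {
fmt.Println("all tasks completed")
}
}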
Example (Simplified)
// Semaphore = buffered channel with empty structs
sem := make(chan struct{}, N)
sem <- struct{}{} // Acquire: take a permit (blocks if full)
<-sem // Release: return a permit
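When the caller should fail fast instead of waiting (the overloaded-microservice case above), the same channel gives a non-blocking acquire via select with a default branch. A sketch, reusing sem from the snippet above:
// Non-blocking acquire: take a permit if one is free, otherwise fail fast
select {
case sem <- struct{}{}:
// got a permit; do the work, then release with <-sem
default:
// semaphore is full; return an error immediately instead of queuing up
}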
Semaphore Disadvantages:
- No priorities. A semaphore does not guarantee acquisition order; there is no FIFO fairness for waiting goroutines.
- Deadlock risk. Every Acquire must be paired with a Release; a missed release (early return, panic) leaves other goroutines blocked forever, so pair Acquire with a deferred Release and consider a context-aware acquire (sketched below) to bound how long a caller waits.
- No ownership. Unlike a mutex, a semaphore can be released by any goroutine, not only the one that acquired it.
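A minimal sketch of such a context-aware acquire on top of the channel-based semaphore (AcquireCtx is a name I made up, and the snippet assumes "context" is imported):
// AcquireCtx takes a permit or gives up when ctx is cancelled or times out
func AcquireCtx(ctx context.Context, sem chan struct{}) error {
select {
case sem <- struct{}{}:
return nil // permit acquired; the caller must still release with <-sem
case <-ctx.Done():
return ctx.Err() // cancelled or timed out while waiting
}
}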
Semaphore vs Other Patterns:
Worker Pool
- Worker Pool: Manages goroutines that execute tasks. Workers live permanently.
- Semaphore: Manages access to a resource. Goroutines are created per task but are blocked by the semaphore before the "heavy" operation.
- Key difference: Semaphore does not create goroutines, it only limits their concurrent execution.
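To make the contrast concrete, here is a minimal worker-pool sketch: three long-lived workers drain a jobs channel, so the fixed goroutine count itself is the concurrency limit and no semaphore is involved (the counts are arbitrary):
package main
import (
"fmt"
"sync"
)
func main() {
jobs := make(chan int)
var wg sync.WaitGroup
// Three permanent workers: the goroutine count itself is the limit
for w := 1; w <= 3; w++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
for job := range jobs {
fmt.Printf("worker %d processing job %d\n", id, job)
}
}(w)
}
for j := 1; j <= 10; j++ {
jobs <- j
}
close(jobs)
wg.Wait()
}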
Mutex (sync.Mutex)
- Mutex: Binary (0 or 1), protects a critical section from simultaneous access.
- Semaphore: Can be counting (N > 1), manages the number of concurrent accesses.
- Key difference: Mutex is for mutual exclusion, semaphore is for limiting concurrency.
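Side by side, a capacity-1 channel semaphore behaves like a lock but enforces no ownership. A sketch, where counter stands in for any shared state (prefer sync.Mutex for real mutual exclusion):
// Mutex: only the goroutine that called Lock may Unlock
var mu sync.Mutex
mu.Lock()
counter++
mu.Unlock()
// Binary semaphore (N = 1): same limit, but any goroutine may release it
lock := make(chan struct{}, 1)
lock <- struct{}{}
counter++
<-lock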
Rate Limiter
- Rate Limiter: Limits the number of operations per time unit (e.g., 100/sec).
- Semaphore: Limits the number of concurrent operations (e.g., 10 concurrent requests).
- Key difference: Rate limiter works with a time window, semaphore works with concurrency.
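For contrast, a sketch of the time-based side using golang.org/x/time/rate, with the 100/sec figure from the list (the burst size is illustrative):
package main
import (
"context"
"fmt"
"golang.org/x/time/rate"
)
func main() {
ctx := context.Background()
limiter := rate.NewLimiter(rate.Limit(100), 1) // 100 operations per second, burst of 1
for i := 1; i <= 5; i++ {
// Wait blocks until the next token is available, spacing calls in time;
// it says nothing about how many of them may run concurrently
if err := limiter.Wait(ctx); err != nil {
break
}
fmt.Println("request", i, "allowed")
}
}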
Channels
- Channels: Pass data between goroutines, can be used as semaphores.
- Semaphore: Specialized primitive for synchronization only, without data passing.
- Key difference: A semaphore is the narrower, intention-revealing primitive for pure access limiting; a channel is a general data-passing mechanism that can be pressed into that role.
Example
package main
import (
"fmt"
"sync"
"time"
)
// Semaphore - concurrency limiter
type Semaphore struct {
ch chan struct{}
}
func NewSemaphore(maxConcurrent int) *Semaphore {
return &Semaphore{
ch: make(chan struct{}, maxConcurrent),
}
}
// Acquire - get a permit (blocks if limit is exceeded)
func (s *Semaphore) Acquire() {
s.ch <- struct{}{} // Send blocks when channel is full
}
// Release - return permit to the pool
func (s *Semaphore) Release() {
<-s.ch // Release slot
}
func worker(id int, sem *Semaphore, wg *sync.WaitGroup) {
defer wg.Done()
sem.Acquire() // Wait for free slot
defer sem.Release() // Release after work
fmt.Printf("[%s] Worker %d: started work\n", time.Now().Format("15:04:05"), id)
time.Sleep(2 * time.Second) // Simulate work
fmt.Printf("[%s] Worker %d: finished\n", time.Now().Format("15:04:05"), id)
}
func main() {
sem := NewSemaphore(3) // Maximum 3 concurrent tasks
var wg sync.WaitGroup
// Start 10 goroutines, but only 3 will be active
for i := 1; i <= 10; i++ {
wg.Add(1)
go worker(i, sem, &wg)
time.Sleep(200 * time.Millisecond) // Small delay between launches
}
wg.Wait()
fmt.Println("All tasks completed")
}
