Concurrency is one of Go’s greatest strengths, but without proper control it can quickly overload your system and drain resources.
The Semaphore Pattern provides a solution by limiting how many goroutines can run concurrently, keeping applications stable and resource usage safe.
What Is a Semaphore?
A semaphore is a synchronization mechanism used to control concurrent access to a shared resource. A simple analogy is a traffic light: only a certain number of cars can pass at a time, while the rest must wait for their turn.
In the same way, a semaphore limits how many processes or goroutines can run concurrently, preventing the system from becoming overloaded.
Why Use a Semaphore?
In modern applications, it is often necessary to limit concurrency, for example:
Rate limiting API requests to avoid being throttled or blocked by an external service.
Controlling database connections so they don’t exceed the capacity of the connection pool.
Maintaining system stability under high load to prevent resource exhaustion.
Without concurrency control, applications risk spikes in CPU, memory, and network usage, which can lead to crashes or severe performance degradation.
Semaphores in Go
Go does not provide a built-in semaphore implementation in the standard library. However, there are two common approaches:
1. Buffered Channel
Use a channel with a fixed capacity to limit the number of goroutines running concurrently. This approach is simple and works without extra dependencies.
2. golang.org/x/sync/semaphore
An external package that provides a more feature-rich semaphore, for example:
Support for weighted semaphores (a task can acquire more than one “slot”).
Tight integration with context.Context, which enables cancellation and timeout handling.
Case Study: Limiting Bulk Email Sending
Imagine a marketplace needs to send 10,000 New Year promotional emails.
If all emails are sent at once:
The SMTP provider may throttle or block requests.
The application server will experience a resource spike.
There is a high risk of being blacklisted as spam.
The result: campaign failure and lost business opportunities.
Solution with Semaphore
By applying a semaphore, we can limit how many emails are sent concurrently, for example to at most 50 at a time.
Other goroutines wait until slots are available.
The system remains stable and controlled.
The campaign runs smoothly without sacrificing performance.
Implementing a Semaphore in Go
1. Using a Buffered Channel
A straightforward idiomatic approach in Go. Each goroutine must acquire a slot from the channel before executing and release it when done.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	totalEmails := 10000
	maxConcurrent := 50
	sem := make(chan struct{}, maxConcurrent) // max 50 emails concurrently
	var wg sync.WaitGroup

	for i := 1; i <= totalEmails; i++ {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot; blocks while 50 sends are in flight
		go func(uid int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			SendPromoEmail(uid)
		}(i)
	}
	wg.Wait() // wait for all goroutines to complete
}

func SendPromoEmail(uid int) {
	// Simulate sending an email
	fmt.Printf("Sending email to user %d\n", uid)
	time.Sleep(100 * time.Millisecond)
}
2. Using golang.org/x/sync/semaphore
For more advanced use cases, the external package offers greater flexibility.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/semaphore"
)

func main() {
	totalEmails := 10000
	maxConcurrent := 50
	ctx := context.Background()
	sem := semaphore.NewWeighted(int64(maxConcurrent))
	var wg sync.WaitGroup

	for i := 1; i <= totalEmails; i++ {
		wg.Add(1)
		if err := sem.Acquire(ctx, 1); err != nil {
			fmt.Printf("Failed to acquire semaphore: %v\n", err)
			wg.Done()
			continue
		}
		go func(uid int) {
			defer wg.Done()
			defer sem.Release(1) // release the slot
			SendPromoEmail(uid)
		}(i)
	}
	wg.Wait() // wait for all goroutines to complete
}

func SendPromoEmail(uid int) {
	// Simulate sending an email
	fmt.Printf("Sending email to user %d\n", uid)
	time.Sleep(100 * time.Millisecond)
}
Other Use Cases
Besides bulk email sending, semaphores are useful in many scenarios, such as:
1. Rate Limiting API Calls
Restricting requests to an external API to avoid exceeding rate limits or being throttled.
2. Batch Job Processing
Controlling how many batch jobs (report generation, ETL, background tasks) run simultaneously to prevent server overload.
3. File I/O or Network Calls
Limiting the number of heavy I/O operations (file uploads/downloads, syncing data to cloud storage) so bandwidth and disk usage remain stable.
Best Practices
When using semaphores in Go, consider these best practices:
Choose the right limit: match the semaphore size to your system’s capacity.
Use context: always handle cancellation and timeouts with context.Context.
Avoid deadlocks: always release the semaphore, preferably with defer.
Monitor performance: profile your application to tune the semaphore size.
Limitations and Alternatives
While semaphores are powerful, there are important limitations:
1. Memory Overhead with Large Workloads
If you spawn one goroutine per task and limit them with a semaphore, memory usage can grow significantly when the workload reaches hundreds of thousands of tasks.
Solution: use a worker pool to keep the number of goroutines constant, while tasks are queued and processed by workers in a controlled manner.
2. Distributed System Limitations
Semaphores only control concurrency within a single program instance.
In a distributed system with many instances (e.g., multiple email-sending services running in parallel in a cluster), a semaphore cannot enforce concurrency limits across nodes.
Solution: use a distributed lock or counter (e.g., in Redis) to coordinate concurrency across nodes. This ensures all instances share a global limit, so only a certain number of tasks run across the entire cluster at the same time.
3. System Reliability
Semaphores, even when combined with distributed locks such as Redis, act only as a concurrency control mechanism. They do not guarantee task reliability.
Things not handled by semaphores or Redis:
Retry & Failure Recovery → there is no built-in retry mechanism.
Task Persistence → if the application crashes, in-progress tasks are lost.
Delivery Guarantee → no assurance that tasks are processed at-least-once or exactly-once.
For high reliability, use a message broker such as Kafka, RabbitMQ, or NATS:
Tasks are stored durably, surviving application crashes.
Built-in support for retries and delivery guarantees.
Workers can consume tasks at their own pace, keeping load stable.
Conclusion
Semaphores are a simple pattern for controlling concurrency at the application level. They are ideal for small to medium workloads where local resource stability is the main concern.
For very large workloads, worker pools are more efficient than spawning massive numbers of goroutines with semaphores.
In distributed systems, semaphores should be combined with distributed coordination (e.g., Redis) to enforce concurrency limits across nodes.
For high reliability, use a message broker (Kafka, RabbitMQ, NATS). Semaphores remain valuable for local concurrency control, while brokers ensure durability, retries, and stable task distribution.
Credits
Car icons created by Vectors Market - Flaticon
Electric car icons created by kosonicon - Flaticon
Traffic light icons created by xnimrodx - Flaticon
Promo icons created by Freepik - Flaticon
People icons created by Vitaly Gorbachev - Flaticon
Marketplace icons created by Dewi Sari - Flaticon