Go's concurrency model is one of its biggest selling points — and at the heart of it all are channels. Channels let goroutines communicate safely without shared memory, making concurrent code cleaner and easier to reason about.
In this post, we'll walk through the patterns in this open-source project that teaches Go channels from the ground up: starting with the basics, then moving into real-world patterns like worker pools, pipelines, and rate limiters.
## What Are Go Channels?
A channel is a typed conduit through which you can send and receive values between goroutines.
```go
ch := make(chan int)      // unbuffered channel
bch := make(chan int, 10) // buffered channel (capacity 10)
```
- Unbuffered: the sender blocks until the receiver is ready (synchronous).
- Buffered: the sender only blocks when the buffer is full (asynchronous up to capacity).
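A quick sketch of the difference, using `len` and `cap` to inspect the buffer:

```go
package main

import "fmt"

func main() {
	// Buffered: sends succeed without a receiver until capacity is reached.
	bch := make(chan int, 2)
	bch <- 1
	bch <- 2
	fmt.Println(len(bch), cap(bch)) // 2 2 — buffer full; a third send would block

	// Unbuffered: a send blocks until a receiver is ready,
	// so the send must happen in another goroutine.
	ch := make(chan int)
	go func() { ch <- 42 }()
	fmt.Println(<-ch) // 42
}
```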
## Pattern 1 — Worker Pool (Fan-Out)
The worker pool is the most common real-world channel pattern. A single master goroutine generates tasks and sends them on a shared channel. Multiple worker goroutines compete to pick up tasks from that same channel — this is called fan-out.
### How it works

```
Master ──► [task channel] ──► Worker 1
                          ──► Worker 2
                          ──► Worker 3
                                  │
                                  ▼
                          [results channel]
                                  │
                                  ▼
                              Collector
```
### Code walkthrough

#### Task definition (task.go)

```go
type Task struct {
	ID   int
	Data string
}

type Result struct {
	TaskID int
	Output string
}
```
#### Worker function

```go
func worker(id int, tasks <-chan Task, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()
	for task := range tasks {
		// simulate work
		output := fmt.Sprintf("Worker %d processed task %d: %s", id, task.ID, task.Data)
		results <- Result{TaskID: task.ID, Output: output}
	}
}
```
#### Master (main.go)

```go
func main() {
	const numWorkers = 3

	tasks := make(chan Task, 10)
	results := make(chan Result, 10)

	var wg sync.WaitGroup

	// Spin up workers
	for i := 1; i <= numWorkers; i++ {
		wg.Add(1)
		go worker(i, tasks, results, &wg)
	}

	// Send tasks
	go func() {
		for i := 1; i <= 9; i++ {
			tasks <- Task{ID: i, Data: fmt.Sprintf("data-%d", i)}
		}
		close(tasks) // signal workers no more tasks
	}()

	// Close results when all workers are done
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results
	for r := range results {
		fmt.Println(r.Output)
	}
}
```
### Key takeaways

- `close(tasks)` is essential — it signals all workers that no more tasks are coming and they should exit their `range` loop.
- `sync.WaitGroup` tracks when all workers have finished so you can safely close the results channel.
- Tasks are automatically load-balanced: whichever worker is free picks up the next task.
## Pattern 2 — Pipeline
A pipeline chains goroutines together so the output of one stage becomes the input of the next. Each stage runs concurrently.
```
Generate ──► Stage 1 ──► Stage 2 ──► Consume
```
```go
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * n
		}
		close(out)
	}()
	return out
}

func main() {
	// Pipeline: generate → square → print
	c := generate(2, 3, 4, 5)
	out := square(c)
	for v := range out {
		fmt.Println(v) // 4, 9, 16, 25
	}
}
```
Each stage is a function that:

- Receives a `<-chan` (read-only channel) as input.
- Returns a `<-chan` as output.
- Runs its own goroutine and closes its output channel when done.

This makes pipelines composable — you can chain as many stages as you like.
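To see that composability in action, here's a hypothetical third stage, `double`, chained after `square` (the `generate` and `square` functions are repeated so the sketch runs on its own):

```go
package main

import "fmt"

func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * n
		}
		close(out)
	}()
	return out
}

// double has the same shape as square: read-only input, read-only
// output, and it closes its output channel when the input is drained.
func double(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * 2
		}
		close(out)
	}()
	return out
}

func main() {
	// generate → square → double
	for v := range double(square(generate(2, 3, 4))) {
		fmt.Println(v) // 8, 18, 32
	}
}
```

Because every stage consumes and produces the same channel type, stages can be reordered or stacked freely.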
## Pattern 3 — Rate Limiter
Rate limiters are critical in production systems — they prevent your service from overwhelming downstream APIs or databases.
Go's time.Tick combined with channels makes this beautifully simple:
```go
func main() {
	requests := make(chan int, 5)
	for i := 1; i <= 5; i++ {
		requests <- i
	}
	close(requests)

	// Allow 1 request every 200ms
	limiter := time.Tick(200 * time.Millisecond)

	for req := range requests {
		<-limiter // block until the next tick
		fmt.Println("request", req, "processed at", time.Now())
	}
}
```

One caveat: `time.Tick` never stops its underlying ticker, and before Go 1.23 that ticker was never garbage-collected — so long-running code often uses `time.NewTicker` and calls `Stop` when finished.
For bursty traffic (allow a burst, then throttle), use a buffered channel fed by a ticker:

```go
burstyLimiter := make(chan time.Time, 3)

// Pre-fill with 3 burst slots
for i := 0; i < 3; i++ {
	burstyLimiter <- time.Now()
}

// Refill at 1 per 200ms
go func() {
	for t := range time.Tick(200 * time.Millisecond) {
		burstyLimiter <- t
	}
}()
```
## Common Pitfalls

| Pitfall | What Happens | Fix |
|---|---|---|
| Forgetting `close(ch)` | Workers hang forever waiting | Always close on the sender side |
| Closing a channel twice | Panic | Use a `sync.Once` or `defer` carefully |
| Sending to a closed channel | Panic | Ensure only the sender closes |
| Goroutine leak | Memory builds up silently | Use `context.Context` for cancellation |
## Running the Examples

Clone the repo and run any example:

```bash
git clone https://github.com/keyadaniel56/golang-channels
cd golang-channels

# Worker pool
cd project1-worker-pool-pattern
go run main.go task.go

# Pipeline
cd ../pipeline
go run main.go

# Rate limiter
cd ../rate-limiter
go run main.go
```
## Summary
| Pattern | Use Case |
|---|---|
| Worker Pool | CPU-bound tasks, parallel processing, job queues |
| Pipeline | ETL, stream processing, multi-stage transformation |
| Rate Limiter | API clients, database writes, external service calls |
Channels are one of those features that take a little time to click — but once they do, you'll find yourself reaching for them constantly. The patterns in this repo are the building blocks behind real-world systems like message queues, web scrapers, and microservice workers.
⭐ If this was helpful, check out the full project on GitHub: golang-channels
Happy coding, and may your goroutines never leak! 🐹