Concurrency is often cited as the primary reason developers choose Go. But if you're coming from a language with a traditional threading model (like Java or C++), Go's approach can feel like a paradigm shift.
It’s not just about "running things at the same time." It is about designing your program as a composition of independently executing processes.
In this guide, we’ll look at how Go handles concurrency, starting with the basics and ending with a real-world pattern.
- Concurrency vs. Parallelism
Before writing code, we need to clear up a common misconception.
Concurrency is about dealing with lots of things at once. It's about structure.
Parallelism is about doing lots of things at once. It's about execution.
Go gives you the tools to write concurrent programs that can run in parallel, but they don't have to.
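To see the distinction in code, you can ask the runtime how much parallelism is available. A minimal sketch using the standard runtime package (the GOMAXPROCS call at the end is purely illustrative):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// NumCPU reports how many logical CPUs the machine has.
	// GOMAXPROCS(0) queries (without changing) how many OS threads
	// may execute Go code simultaneously.
	fmt.Println("Logical CPUs:", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:  ", runtime.GOMAXPROCS(0))

	// Forcing GOMAXPROCS to 1 removes parallelism, but your program is
	// still concurrent: goroutines simply interleave on a single thread.
	runtime.GOMAXPROCS(1)
}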
- The Goroutine
A goroutine is a lightweight thread managed by the Go runtime. Goroutines are incredibly cheap: you can easily spin up tens of thousands of them on a modest laptop.
To start one, you just use the go keyword.
package main

import (
	"fmt"
	"time"
)

func speak(word string) {
	for i := 0; i < 3; i++ {
		fmt.Println(word)
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	// Runs in a new goroutine
	go speak("Goroutine says: Hi!")

	// Runs in the main goroutine
	speak("Main says: Hello!")

	// Give the goroutine time to finish before main exits
	time.Sleep(1 * time.Second)
}
The Catch: when the main function returns, the program exits and all other running goroutines are stopped immediately, whether they have finished or not. We need a way to wait for them.
- Synchronization with WaitGroups
Using time.Sleep to wait for goroutines is hacky. The correct way to wait for a collection of goroutines to finish is sync.WaitGroup.
package main

import (
	"fmt"
	"sync"
	"time"
)

func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done() // Decrement the counter when the function exits
	fmt.Printf("Worker %d starting\n", id)
	time.Sleep(time.Second) // Simulate work
	fmt.Printf("Worker %d done\n", id)
}

func main() {
	var wg sync.WaitGroup

	for i := 1; i <= 3; i++ {
		wg.Add(1) // Increment the counter
		go worker(i, &wg)
	}

	wg.Wait() // Block until the counter goes back to 0
	fmt.Println("All workers done")
}
- Channels: "Share Memory By Communicating"
This is Go's golden rule. Instead of locking variables with mutexes (though Go has those too), you pass data between goroutines using channels.
Think of a channel as a pipe. One goroutine puts data in one end, and another picks it up from the other.
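For contrast, here is roughly what the locking alternative looks like. This is a sketch, not part of the original example: a shared counter guarded by a sync.Mutex, with a WaitGroup (introduced above) to wait for the goroutines.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // Only one goroutine may hold the lock at a time
			counter++
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println("counter:", counter) // Always 100
}

The channel-based style below avoids the explicit lock entirely.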
Unbuffered Channels
Unbuffered channels block the sender until the receiver is ready (and vice versa). This provides implicit synchronization.
package main

import "fmt"

func main() {
	messages := make(chan string)

	go func() {
		messages <- "ping" // Blocks here until main is ready to receive
	}()

	msg := <-messages // Blocks here until "ping" is sent
	fmt.Println(msg)
}
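Channels can also be buffered. A buffered channel only blocks the sender once its buffer is full, which is what the worker pool in the next section relies on. A quick sketch, assuming a buffer of capacity 2:

package main

import "fmt"

func main() {
	// A buffered channel with capacity 2: sends don't block until the buffer is full.
	messages := make(chan string, 2)

	messages <- "buffered"
	messages <- "channel" // Still doesn't block; the buffer is now full

	fmt.Println(<-messages)
	fmt.Println(<-messages)
}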
- Pattern: The Worker Pool
Let's put it all together. A common pattern in backend development is the Worker Pool: you have a queue of jobs and a fixed number of workers processing them concurrently.
This is useful when you have 10,000 tasks but only want to process 5 at a time to save CPU/Memory.
package main

import (
	"fmt"
	"time"
)

// worker pulls jobs off the jobs channel and sends the processed
// result down the results channel until jobs is closed.
func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		fmt.Println("worker", id, "started job", j)
		time.Sleep(time.Second) // Simulate an expensive task
		fmt.Println("worker", id, "finished job", j)
		results <- j * 2
	}
}

func main() {
	const numJobs = 5
	jobs := make(chan int, numJobs)
	results := make(chan int, numJobs)

	// 1. Start 3 workers
	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}

	// 2. Send 5 jobs to the jobs channel
	for j := 1; j <= numJobs; j++ {
		jobs <- j
	}
	close(jobs) // Close the channel to signal there are no more jobs

	// 3. Collect the results (values are discarded in this example)
	for a := 1; a <= numJobs; a++ {
		<-results
	}
}
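In the example above the results are received but thrown away. One common refinement (a sketch, not part of the original code) is to track the workers with a sync.WaitGroup and close the results channel once they all finish, so the consumer can simply range over it:

package main

import (
	"fmt"
	"sync"
)

func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		results <- j * 2
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)

	var wg sync.WaitGroup
	for w := 1; w <= 3; w++ {
		wg.Add(1)
		go worker(w, jobs, results, &wg)
	}

	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	// Close results once every worker has returned, so the range below terminates.
	go func() {
		wg.Wait()
		close(results)
	}()

	for r := range results {
		fmt.Println("result:", r)
	}
}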
- Common Pitfalls
Deadlocks: If every goroutine ends up blocked waiting on a channel that nothing will ever write to, the Go runtime aborts with a fatal deadlock error (a minimal example follows below).
Race Conditions: If two goroutines access the same variable without synchronization (a lock or a channel) and at least one of them writes, the result is unpredictable. Always run your tests with go test -race to catch these.
Leaking Goroutines: If you start a goroutine but it gets stuck waiting for a channel forever, it will never be garbage collected. This is a memory leak.
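As a concrete example of the first pitfall, receiving from a channel that nothing ever sends on is enough to crash the program. A deliberately broken sketch:

package main

func main() {
	ch := make(chan int)
	// No goroutine will ever send on ch, so the runtime aborts with:
	// fatal error: all goroutines are asleep - deadlock!
	<-ch
}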
- Summary
Go’s concurrency model is powerful because it abstracts the complexity of OS threads. By using Goroutines for execution and Channels for communication, you can build highly scalable systems that are easier to reason about than traditional threaded code.