Chapter 8: Concurrency Without Fear
Wednesday morning arrived cold and bright. Ethan descended to the archive carrying coffee and a paper bag that smelled of cinnamon and cardamom.
Eleanor looked up. "What's that scent?"
"Kanelbullar. Swedish cinnamon rolls. The baker said they're made by multiple people working together—one mixes, one shapes, one bakes."
She smiled. "Perfect timing. Today we're talking about concurrency—how to make programs do multiple things at once."
Ethan set down the coffees. "Like threads?"
"Similar, but lighter. Much lighter." Eleanor opened a new file:
package main

import (
    "fmt"
    "time"
)

func sayHello() {
    for i := 0; i < 3; i++ {
        fmt.Println("Hello")
        time.Sleep(100 * time.Millisecond)
    }
}

func sayWorld() {
    for i := 0; i < 3; i++ {
        fmt.Println("World")
        time.Sleep(100 * time.Millisecond)
    }
}

func main() {
    sayHello()
    sayWorld()
    fmt.Println("Done")
}
She ran it:
Hello
Hello
Hello
World
World
World
Done
"Sequential execution. sayHello runs completely, then sayWorld runs. Simple, predictable, slow. Now watch this:"
package main

import (
    "fmt"
    "time"
)

func sayHello() {
    for i := 0; i < 3; i++ {
        fmt.Println("Hello")
        time.Sleep(100 * time.Millisecond)
    }
}

func sayWorld() {
    for i := 0; i < 3; i++ {
        fmt.Println("World")
        time.Sleep(100 * time.Millisecond)
    }
}

func main() {
    go sayHello()
    go sayWorld()
    time.Sleep(400 * time.Millisecond)
    fmt.Println("Done")
}
Output:
Hello
World
Hello
World
Hello
World
Done
"One word changed everything. The go keyword. go sayHello() launches sayHello as a goroutine—a lightweight concurrent function. Both functions run at the same time, their output interleaved."
Ethan stared at the screen. "That's it? Just go?"
"That's it. Goroutines are Go's fundamental unit of concurrency. They're like threads, but much cheaper. You can launch thousands—even millions—of goroutines without overwhelming your system. The Go runtime manages them efficiently."
"Why the sleep at the end of main?"
Eleanor pointed to the code. "Because main is a goroutine too. If main exits, the program ends—all other goroutines are killed, whether they've finished or not. The sleep gives them time to complete. But this is hacky. Let me show you the right way."
She opened a new file:
package main

import (
    "fmt"
    "time"
)

func worker(id int) {
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    for i := 1; i <= 3; i++ {
        go worker(i)
    }
    time.Sleep(2 * time.Second)
    fmt.Println("All workers finished")
}
Output:
Worker 1 starting
Worker 3 starting
Worker 2 starting
Worker 1 done
Worker 2 done
Worker 3 done
All workers finished
"Three workers, all running concurrently. Notice the starting order isn't guaranteed—goroutines are scheduled by the runtime, and scheduling is non-deterministic."
"But we're still using sleep to wait."
"Exactly. And that's terrible. We need a way for goroutines to communicate—to say 'I'm done' or 'here's your result.' That's what channels are for."
Eleanor typed:
package main

import (
    "fmt"
    "time"
)

func worker(id int, done chan bool) {
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
    done <- true
}

func main() {
    done := make(chan bool)
    go worker(1, done)
    <-done
    fmt.Println("Worker finished")
}
Output:
Worker 1 starting
Worker 1 done
Worker finished
"A channel. chan bool is a channel that carries boolean values. We create it with make(chan bool). The worker sends true into the channel with done <- true. Main receives from the channel with <-done. And here's the key: receiving from a channel blocks until there's something to receive."
"So main waits?"
"Exactly. <-done blocks until the worker finishes and sends a value. No sleep, no guessing. Synchronization through communication."
Eleanor drew in her notebook:
Goroutine Communication via Channels:

    Worker Goroutine                 Main Goroutine
          |                                |
          |  do work...                    |
          |                                |
          |  done <- true  --------->  <-done  (blocks here)
          |                                |
          |  exits                         |  continues
"Channels are typed pipes. Data goes in one end, comes out the other. The sender and receiver synchronize automatically."
Eleanor paused. "Now, when you just need to wait for goroutines to finish—no data to pass, just synchronization—there's a cleaner pattern:"
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
    fmt.Println("All workers finished")
}
Output:
Worker 1 starting
Worker 2 starting
Worker 3 starting
Worker 1 done
Worker 2 done
Worker 3 done
All workers finished
"A WaitGroup. You call wg.Add(1) before launching each goroutine, and the goroutine calls wg.Done() when finished—usually with defer so it happens even if the function panics. Then wg.Wait() blocks until all goroutines call Done(). It's purpose-built for 'wait until all workers finish.'"
"So channels for data, WaitGroups for synchronization?"
"Exactly. Choose the right tool for the job."
She typed a new example:
package main

import "fmt"

func sum(numbers []int, result chan int) {
    total := 0
    for _, num := range numbers {
        total += num
    }
    result <- total
}

func main() {
    numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    result := make(chan int)
    go sum(numbers[:5], result)
    go sum(numbers[5:], result)
    sum1 := <-result
    sum2 := <-result
    fmt.Println("Partial sums:", sum1, sum2)
    fmt.Println("Total:", sum1+sum2)
}
Output:
Partial sums: 15 40
Total: 55
"Two goroutines, each summing half the slice. Both send their results back through the same channel. Main receives twice—once from each goroutine. The order doesn't matter; we get both results."
Ethan studied the code. "What if nobody receives? Or nobody sends?"
"Deadlock. The runtime kills the program with a fatal error. Watch:"
package main

func main() {
    ch := make(chan int)
    ch <- 42 // This will deadlock
}
"This tries to send to a channel with no receiver. The send blocks forever, and Go detects the deadlock at runtime."
"How do you avoid that?"
"By designing your concurrency carefully. Channels are synchronization points—senders and receivers must meet. If they don't, you've got a problem."
Eleanor opened a new file. "Now, there's another kind of channel—a buffered channel:"
package main

import "fmt"

func main() {
    ch := make(chan int, 2) // Buffer size 2
    ch <- 1
    ch <- 2
    // ch <- 3 // This would block
    fmt.Println(<-ch)
    fmt.Println(<-ch)
}
Output:
1
2
"make(chan int, 2) creates a channel with a buffer of size 2. You can send two values without a receiver waiting. The third send would block until someone receives."
"Why use buffered channels?"
"Performance. Unbuffered channels require both sender and receiver to be ready at the same time. Buffered channels let the sender get ahead a bit. Think of it like a mailbox—you can drop off letters even if nobody's home, as long as there's room in the box."
Eleanor typed another example:
package main

import (
    "fmt"
    "time"
)

func producer(ch chan int) {
    for i := 0; i < 5; i++ {
        fmt.Println("Producing:", i)
        ch <- i
        time.Sleep(100 * time.Millisecond)
    }
    close(ch)
}

func consumer(ch chan int) {
    for val := range ch {
        fmt.Println("Consuming:", val)
        time.Sleep(200 * time.Millisecond)
    }
}

func main() {
    ch := make(chan int, 2)
    go producer(ch)
    consumer(ch)
}
Output:
Producing: 0
Producing: 1
Consuming: 0
Producing: 2
Producing: 3
Consuming: 1
Producing: 4
Consuming: 2
Consuming: 3
Consuming: 4
"The producer sends values into a buffered channel. The consumer reads them. Notice close(ch)—the producer closes the channel when done. The consumer uses range to read until the channel closes. This is a classic producer-consumer pattern."
"What happens if you don't close the channel?"
"The range loop waits forever for more values—eventually causing a deadlock when all goroutines are blocked. Closing signals 'no more data coming.' Only the sender should close a channel, never the receiver." Eleanor paused. "And here's something important: receiving from a closed, drained channel never blocks—it immediately returns the zero value, with false for the ok check. That's how range knows to stop."
Eleanor opened a new file. "Now, the real power—selecting from multiple channels:"
package main

import (
    "fmt"
    "time"
)

func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)
    go func() {
        time.Sleep(1 * time.Second)
        ch1 <- "Message from channel 1"
    }()
    go func() {
        time.Sleep(2 * time.Second)
        ch2 <- "Message from channel 2"
    }()
    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println(msg1)
        case msg2 := <-ch2:
            fmt.Println(msg2)
        }
    }
}
```
Output:
Message from channel 1
Message from channel 2
"The select statement. It waits on multiple channel operations simultaneously. Whichever channel is ready first, that case executes. It's like a switch statement for channels. Notice the anonymous functions—go func() {...}() defines and immediately launches a function as a goroutine."
"What if multiple channels are ready?"
"Go picks one randomly. Fair selection."
Eleanor typed one more example:
package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan string)
    go func() {
        time.Sleep(2 * time.Second)
        ch <- "Result"
    }()
    select {
    case result := <-ch:
        fmt.Println("Received:", result)
    case <-time.After(1 * time.Second):
        fmt.Println("Timeout: operation took too long")
    }
}
Output:
Timeout: operation took too long
"select with a timeout. time.After returns a channel that sends after a duration. If the main channel doesn't produce in time, the timeout case executes. This is how you prevent goroutines from blocking forever."
Ethan was quiet for a moment. "This feels different from threads and locks."
"It is. In most languages, you share memory between threads and use locks to prevent race conditions. In Go, you share memory by communicating—channels carry the data, and synchronization happens naturally."
Eleanor opened a new file. "But here's something critical. Channels are for communication. If multiple goroutines access the same variable directly, you will have data races. Watch:"
package main

import (
    "fmt"
    "time"
)

func main() {
    counter := 0
    for i := 0; i < 1000; i++ {
        go func() {
            counter++ // DANGER: Data race!
        }()
    }
    time.Sleep(time.Second)
    fmt.Println("Counter:", counter)
}
"This looks innocent, but it's broken. A thousand goroutines all modifying counter at once. The result is unpredictable—sometimes you'll get 1000, sometimes less, because the increments interfere with each other."
"How do you fix it?"
"Two ways. Either send updates through a channel, or use a mutex to protect the variable. We'll cover mutexes in the next chapter—they're Go's answer to traditional locking. But the Go way is to prefer channels when you can."
Eleanor closed her laptop. "Rob Pike said it best: 'Don't communicate by sharing memory; share memory by communicating.' Channels are the mechanism. Goroutines are the workers. Together, they make concurrency manageable."
She took a cinnamon roll. "That's the foundation. Goroutines for concurrent execution, channels for communication, WaitGroups for synchronization, select for coordination. And always watch for data races when goroutines share state."
She finished her coffee. "Next time: mutexes and when channels aren't the answer. Sometimes you need traditional locks, and Go provides them—but with the same philosophy of simplicity."
Ethan gathered the cups. "Eleanor?"
"Yes?"
"You said goroutines are cheap. How cheap?"
Eleanor smiled. "A goroutine starts with about 2KB of stack space—compare that to threads in other languages, which often start with 1-2MB. The Go runtime grows and shrinks goroutine stacks dynamically. That's why you can have millions of them. They're not kernel threads—they're multiplexed onto OS threads by the Go scheduler."
"So Go manages them?"
"Completely. You write go, the runtime handles scheduling, stack management, everything. That's the power—concurrency without the complexity."
Ethan climbed the stairs, thinking about lightweight workers and typed pipes and the way communication replaced locks. In Java, he'd used threads and mutexes and semaphores. In Go, he'd just write go and pass data through channels.
Maybe that was the pattern: Go made concurrency a first-class citizen. Not an afterthought, not a library—a core part of the language. And by focusing on communication instead of shared state, it made concurrent programs easier to reason about.
Key Concepts from Chapter 8
Goroutines: Lightweight concurrent functions launched with the go keyword. Managed by the Go runtime.
The go keyword: go function() launches a function as a goroutine, running concurrently with the caller.
Channels: Typed conduits for communication between goroutines. Created with make(chan Type).
Sending to channels: ch <- value sends a value into a channel. On an unbuffered channel, the send blocks until a receiver is ready; on a buffered channel, it blocks only when the buffer is full.
Receiving from channels: value := <-ch receives a value from a channel. Blocks until a value is available.
Unbuffered channels: make(chan Type) creates a channel with no buffer. Send and receive must happen simultaneously.
Buffered channels: make(chan Type, capacity) creates a channel with a buffer. Sends don't block until buffer is full.
Closing channels: close(ch) signals no more values will be sent. Only the sender should close. Receiving from a closed channel returns the zero value and false immediately.
Range over channels: for value := range ch receives values until the channel is closed.
WaitGroups: sync.WaitGroup provides a way to wait for multiple goroutines to finish. Use Add(1) before launching, Done() when finished, and Wait() to block until all are done.
Select statement: select waits on multiple channel operations. Executes the first case that's ready. If multiple are ready, picks randomly.
Timeouts: Combine select with time.After(duration) to implement operation timeouts.
Anonymous goroutines: go func() { ... }() defines and immediately launches an anonymous function as a goroutine.
Deadlock detection: Go detects when all goroutines are blocked and aborts the program with a fatal deadlock error.
Goroutine lifecycle: If main exits, all goroutines are terminated immediately, whether finished or not.
Communication vs shared memory: Go's philosophy: "Don't communicate by sharing memory; share memory by communicating."
Data races: When multiple goroutines access the same variable without synchronization, you have a data race. Use channels or mutexes to prevent them.
Goroutine efficiency: Goroutines start with ~2KB of stack space and grow/shrink dynamically. You can launch millions of them.
Next chapter: Mutexes and Synchronization—where Ethan learns when channels aren't the answer, and Eleanor explains how to safely share state between goroutines.
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.
Top comments (1)
What I appreciate most about this chapter is how it captures the spirit of Go’s concurrency model, not just the mechanics. Most explanations treat goroutines and channels as features. This chapter treats them as a design philosophy — and honestly, that’s where Go really shines.
The story format highlights something subtle: Go’s concurrency isn’t about “making things run at the same time,” it’s about structuring the way responsibilities move through a program. The moment Eleanor introduces goroutines with that simple go keyword, the whole idea clicks — concurrency in Go isn’t forced, it’s invited.
The deeper insight here is how Go pushes developers to think in terms of communication flow instead of shared state. A lot of languages claim to support concurrency, but they hand you locks, semaphores, atomics… tools that basically say: “Here’s a dangerous box. Don’t lose a finger.” Go goes the opposite direction. It says: “Don’t fight over memory. Just talk to each other.”
And the examples in this chapter illustrate that beautifully:
Goroutines as cheap, disposable workers
Channels as typed, synchronous communication layers
WaitGroups as structured synchronization points
select as a way to orchestrate flow the same way conductors manage timing
What’s especially elegant is how the chapter shows when each tool is appropriate. A lot of people misuse channels because they fall in love with the abstraction. Here, you finally see the separation clearly:
Channels → for passing data and coordinating behavior
WaitGroups → for joining tasks without sending information
Buffered channels → for managing flow without tight coupling
Select → for concurrent choice and fairness
It’s one of the most complete “mental models” of Go concurrency I’ve seen in a narrative form.
The part about deadlocks and blocked channels is also spot-on. Too many tutorials skip past the idea that concurrency is about designing communication pathways. If nobody receives, a send blocks. If nobody sends, a receive blocks. The metaphor of “meeting points” between goroutines is exactly how Go wants you to think about it.
And then there’s the final piece: goroutine stack size and scheduling. The explanation that goroutines start at ~2KB and expand dynamically isn’t just a fun fact — it’s the key to understanding why Go can treat concurrency as a first-class construct without the overhead of OS threads. It’s why Go code feels natural where other languages feel heavy-handed.
By the time Ethan walks away thinking about “typed pipes” and “lightweight workers,” you can feel the paradigm shift happening. That’s the true value of this chapter: it doesn’t just teach concurrency, it teaches the mindset behind it.
Overall, this is one of those write-ups that helps developers see that Go isn’t trying to simplify concurrency — it’s trying to liberate it. Concurrency becomes something you design, not something you fight with.
Really impressive chapter.