Chapter 9: When Channels Aren't Enough
Thursday afternoon, rain pattering against the archive windows
Eleanor had rearranged the small study room—moved the desk closer to the window so they could watch the weather while they worked.
Ethan arrived with his laptop, a paper bag that smelled of fresh croissants, and two cups of Ethiopian coffee from the specialty roaster around the corner.
"You remembered," Eleanor said, accepting a cup.
"Always. The new roast this week—you'll like it." He unwrapped the croissants. "They're still warm."
Eleanor took a sip. "Perfect. Now, about mutexes..."
"Last time we talked about channels," Eleanor began, not looking up from her notebook. "The Go way. Share memory by communicating."
"Right. Send data through channels, let them synchronize."
"Exactly. But today we're going to talk about the exception." She paused. "Or rather, the complement. Sometimes channels are overkill. Sometimes you just need to protect a shared variable."
Eleanor opened a file:
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var counter int
    var mu sync.Mutex

    for i := 0; i < 1000; i++ {
        go func() {
            mu.Lock()
            counter++
            mu.Unlock()
        }()
    }

    time.Sleep(time.Second)
    fmt.Println("Counter:", counter)
}
Output:
Counter: 1000
"A mutex. sync.Mutex. It stands for 'mutual exclusion.' You call Lock() before accessing the variable, and Unlock() after you're done. While one goroutine holds the lock, all others wait. That ensures only one goroutine modifies counter at a time."
Eleanor paused. "One note: in this example, we used time.Sleep to wait for the goroutines. That's convenient for teaching, but in production code, you'd use sync.WaitGroup to properly synchronize—we covered that last chapter."
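She added a quick sketch below the example of what the production version might look like, with last chapter's sync.WaitGroup in place of the sleep. A minimal sketch, but the same counter and the same mutex:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var counter int
    var mu sync.Mutex
    var wg sync.WaitGroup

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done() // mark this goroutine finished
            mu.Lock()
            counter++
            mu.Unlock()
        }()
    }

    wg.Wait() // blocks until every goroutine has called Done
    fmt.Println("Counter:", counter)
}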
Ethan studied the code. "So it's like... a bouncer at a club?"
"Perfect analogy. Only one person gets inside the protected section at a time. Everyone else waits in line."
"But you said channels are the Go way."
"They are, for most cases. But think about it logically: if all you're doing is incrementing a counter, spinning up a goroutine just to send a message through a channel is wasteful. A mutex is simpler, faster, and clearer about intent."
Eleanor typed:
package main

import (
    "fmt"
    "sync"
    "time"
)

type Counter struct {
    mu    sync.Mutex
    value int
}

func (c *Counter) Increment() {
    c.mu.Lock()
    c.value++
    c.mu.Unlock()
}

func (c *Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}

func main() {
    counter := Counter{}

    for i := 0; i < 1000; i++ {
        go func() {
            counter.Increment()
        }()
    }

    // Wait for goroutines
    time.Sleep(time.Second)
    fmt.Println("Final:", counter.Value())
}
"Notice something here?" Eleanor pointed. "In the Value() method, I used defer. That's critical. If a panic happens between Lock() and Unlock(), the defer ensures Unlock() still runs. Otherwise you've got a deadlock."
"So defer is like a safety net?"
"Exactly. In Go, defer is how you guarantee cleanup happens. Lock and unlock are a paired operation—defer makes that explicit and safe."
Eleanor leaned back. "Now, there's a subtlety here. What if you need to read the counter a thousand times a second, but only write to it occasionally?"
"You'd still need the mutex?"
"Yes, but it's inefficient. You'd block readers even when they're just reading. That's where RWMutex comes in."
Eleanor typed a new example:
package main

import (
    "fmt"
    "sync"
    "time"
)

type Cache struct {
    mu    sync.RWMutex
    items map[string]string
}

func (c *Cache) Set(key, value string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items[key] = value
}

func (c *Cache) Get(key string) string {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return c.items[key]
}

func main() {
    cache := Cache{items: make(map[string]string)}

    // Writer
    go func() {
        for i := 0; i < 5; i++ {
            cache.Set("key", fmt.Sprintf("value-%d", i))
            time.Sleep(100 * time.Millisecond)
        }
    }()

    // Multiple readers
    for i := 0; i < 3; i++ {
        go func(id int) {
            for j := 0; j < 10; j++ {
                val := cache.Get("key")
                fmt.Printf("Reader %d: %s\n", id, val)
                time.Sleep(50 * time.Millisecond)
            }
        }(i)
    }

    time.Sleep(2 * time.Second)
    fmt.Println("Done")
}
Output:
Reader 0:
Reader 1:
Reader 2:
Reader 0: value-0
Reader 1: value-0
Reader 2: value-0
Reader 0: value-1
Reader 1: value-1
Reader 2: value-1
Reader 0: value-2
Reader 1: value-2
Reader 2: value-2
Reader 0: value-3
Reader 1: value-3
Reader 2: value-3
Reader 0: value-4
Reader 1: value-4
Reader 2: value-4
Done
Eleanor pointed at the output. "Notice the readers start with empty values—that's because the cache starts empty and the first readers execute before the writer sets anything. After the first write, all readers see the values."
"RWMutex. Read-Write mutex. You call Lock() for exclusive access when writing. You call RLock() for shared access when reading. Multiple readers can hold RLock() simultaneously—they don't block each other. But when a writer needs Lock(), it blocks until all readers are done, and no new readers can acquire RLock() until the writer finishes."
"So readers don't interfere with each other, but writers get priority?"
"Not priority, exactly. But mutual exclusion. A writer needs exclusive access, so it waits for readers to finish, and readers wait for the writer to finish. The readers queue up fairly."
Ethan leaned forward. "When would you use mutex instead of channels?"
Eleanor pulled out her notebook and sketched:
Channels:
- Multiple goroutines need to pass data
- Natural producer-consumer relationships
- Work distribution
- Clear synchronization points
Mutexes:
- Protecting a shared variable
- Many goroutines accessing one resource
- Read-heavy workloads (use RWMutex)
- Simple state management
"Channels are about communication and orchestration. Mutexes are about protecting state. Different problems, different tools."
Eleanor turned the page of her notebook and drew two distinct diagrams.
The Channel Model

THE CHANNEL MODEL (Relay Race)
"Sharing memory by communicating"

[ Goroutine A ]                        [ Goroutine B ]
   (Has Data)                                ^
       |                                     |
       +---------> [ CHANNEL ] >-------------+

The data moves. Ownership transfers.
Synchronization happens at the hand-off.
The Mutex Model
THE MUTEX MODEL (The Restroom Key)
"Communicating by sharing memory"
[ Goroutine A ] 🔑 (Has Lock)
        |
        v
+------------------+
| SHARED RESOURCE  | <---- [ Goroutine B ] ⛔ (Blocked/Wait)
+------------------+ <---- [ Goroutine C ] ⛔ (Blocked/Wait)

The data stays put. Access is restricted.
Synchronization happens by stopping others.
Ethan studied the sketches. "So channels are about moving data, and mutexes are about controlling access."
"Exactly right."
Eleanor opened a new file:
package main

import (
    "fmt"
    "time"
)

func main() {
    // BAD: Using a channel when a mutex would be better
    updates := make(chan int)

    go func() {
        for i := 0; i < 100; i++ {
            updates <- i
            time.Sleep(10 * time.Millisecond)
        }
        close(updates)
    }()

    var latest int
    for val := range updates {
        latest = val
    }
    fmt.Println("Latest:", latest)
}
"This works, but it's inefficient. Every single update goes through the channel. If you have a hundred updates a second, you're spinning up channel sends for each one. Now look at the mutex version:"
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    // BETTER: Using a mutex for simple state
    var mu sync.Mutex
    var latest int

    go func() {
        for i := 0; i < 100; i++ {
            mu.Lock()
            latest = i
            mu.Unlock()
            time.Sleep(10 * time.Millisecond)
        }
    }()

    time.Sleep(200 * time.Millisecond)
    mu.Lock()
    fmt.Println("Latest:", latest)
    mu.Unlock()
}
"Simpler. Faster. Everyone sees the latest value without needing to receive from a channel. It's just protected state."
"But what about deadlocks with mutexes?"
Eleanor's expression grew serious. "Good question. Deadlocks can happen with mutexes too, just differently."
She typed:
package main

import (
    "sync"
)

func main() {
    var mu sync.Mutex
    mu.Lock()
    mu.Lock() // DEADLOCK - same goroutine tries to lock twice
}
"This is called a 'deadlock from re-entrancy.' The goroutine locks the mutex, then tries to lock it again. It blocks waiting for the lock—which it already holds. Program hangs forever."
"How do you prevent that?"
"Careful design. Always lock and unlock in the same function. If you call another function while holding a lock, make sure that function doesn't try to acquire the same lock. The pattern is: lock, do your work, unlock. No surprises."
Eleanor opened another file:
package main

import (
    "fmt"
    "sync"
    "time"
)

type BankAccount struct {
    id      int // ID for consistent lock ordering
    mu      sync.Mutex
    balance int
}

// withdrawUnsafe performs withdrawal without locking.
// Must only be called when the caller holds the mutex.
func (a *BankAccount) withdrawUnsafe(amount int) bool {
    if a.balance >= amount {
        a.balance -= amount
        return true
    }
    return false
}

// depositUnsafe performs deposit without locking.
// Must only be called when the caller holds the mutex.
func (a *BankAccount) depositUnsafe(amount int) {
    a.balance += amount
}

// Transfer moves money from one account to another atomically
func Transfer(from, to *BankAccount, amount int) bool {
    // Always lock in the same order: by account ID.
    // This prevents circular waiting deadlock.
    if from.id < to.id {
        from.mu.Lock()
        defer from.mu.Unlock()
        to.mu.Lock()
        defer to.mu.Unlock()
    } else {
        to.mu.Lock()
        defer to.mu.Unlock()
        from.mu.Lock()
        defer from.mu.Unlock()
    }

    // Now both locks are held. Call unsafe methods.
    if from.withdrawUnsafe(amount) {
        to.depositUnsafe(amount)
        return true
    }
    return false
}

func main() {
    account1 := BankAccount{id: 1, balance: 100}
    account2 := BankAccount{id: 2, balance: 50}

    go Transfer(&account1, &account2, 10)
    go Transfer(&account2, &account1, 5)

    time.Sleep(100 * time.Millisecond)
    fmt.Printf("Account 1: %d, Account 2: %d\n", account1.balance, account2.balance)
}
Output:
Account 1: 95, Account 2: 55
"See what I did here?" Eleanor pointed at the code. "I extracted withdrawUnsafe and depositUnsafe—methods that don't lock. They're only meant to be called when the caller already holds the mutex. That makes the intent clear: Transfer handles all synchronization. The unsafe methods are internal details."
"But why the ID field?"
"Ah, that's the key to preventing circular waiting deadlock. When you have multiple accounts and multiple goroutines doing transfers, you could get stuck. Account A locks itself, then tries to lock Account B. Meanwhile, Account B locks itself and tries to lock Account A. They're waiting for each other—deadlock."
"So the ID fixes that?"
"By always locking in the same order—by ID—you prevent the circular waiting. Even if a hundred goroutines are doing transfers in opposite directions, they all follow the same ordering rule. One always locks the lower ID first."
Ethan studied the code. "That's clever. But complicated."
"It is. And most of the time, you avoid this problem by designing differently. Maybe Transfer is handled by a single manager goroutine, or you use channels to coordinate the transfers. But when you do need multiple locks, lock ordering is how you stay safe."
Eleanor paused. "The critical lesson: think through your locking strategy before you code. Don't discover deadlocks in production."
"What happens if you get it wrong?"
"Your program hangs. The tests pass because they don't trigger the specific sequence. Then a user runs a particular operation at scale, and boom—deadlock. That's why understanding the pattern matters."
Eleanor leaned back. "Now, there's one more thing. And this is important. Don't do this:"
package main

import (
    "sync"
    "time"
)

func main() {
    var mu sync.Mutex
    mu.Lock()
    defer mu.Unlock()

    // Lots of work here
    time.Sleep(5 * time.Second)
    // Everyone else is waiting
}
"Hold locks for as short a time as possible. Lock, do minimal work, unlock. If you hold a lock for five seconds while doing complex operations, you've serialized your entire program. It defeats the purpose of concurrency."
"So lock scopes matter?"
"Critically. The smaller the critical section—the code between Lock and Unlock—the better your concurrency."
Eleanor closed her laptop halfway. "Now, let me show you something that confused me when I was learning Go. There's also sync.Once."
package main

import (
    "fmt"
    "sync"
    "time"
)

var (
    instance string
    once     sync.Once
)

func GetInstance() string {
    once.Do(func() {
        instance = "initialized"
        fmt.Println("Initializing...")
    })
    return instance
}

func main() {
    for i := 0; i < 5; i++ {
        go func() {
            fmt.Println(GetInstance())
        }()
    }
    time.Sleep(100 * time.Millisecond)
}
Output:
Initializing...
initialized
initialized
initialized
initialized
initialized
"sync.Once guarantees that a function runs exactly once, no matter how many goroutines call it. Perfect for initialization that should only happen once. Lazy initialization, singleton patterns—Do() handles all the synchronization for you."
Ethan smiled. "That's elegant."
"It is. And it's useful. You use it when you need to initialize something once and only once, even if multiple goroutines compete to do the initializing."
Eleanor gathered her notes. "So, to summarize: Mutexes protect shared state. RWMutex lets readers share the lock. Lock for short critical sections. Avoid deadlock by thinking through your locking order. And use Once when initialization needs to happen exactly once."
"But channels when you're communicating?"
"Channels when you're coordinating work, passing data, orchestrating goroutines. Mutexes when you're protecting a variable that multiple goroutines need to access. Choose the tool that matches the problem."
Ethan looked out at the rain. "Eleanor, why does Go have both?"
She followed his gaze to the window. "Because concurrency is complex. Go doesn't hide that complexity—it gives you good tools for different situations. Channels are beautiful for some problems. Mutexes are the right answer for others. The more you program in Go, the clearer it becomes which is which."
"And if you choose wrong?"
"You refactor. Or you learn to recognize the pattern earlier next time. That's how experience builds."
Eleanor stood, stretching. "Next time, we'll talk about the sync package more—atomic operations, condition variables, all the other tools that exist for different synchronization needs. But this is the foundation: channels for communication, mutexes for protection."
Ethan closed his laptop. "Same time next week?"
"Same time. Bring questions." Eleanor smiled. "You're asking the right ones."
Key Concepts from Chapter 9
Mutex (sync.Mutex): A lock that provides mutual exclusion. Lock() grants exclusive access; Unlock() releases it. Only one goroutine can hold the lock at a time.
Lock and Unlock: Paired operations. Lock() blocks until available; Unlock() releases the lock. Use defer Unlock() to ensure it runs even if a panic occurs.
Critical Section: The code between Lock() and Unlock(). Keep it small—lock for short periods to maintain concurrency.
RWMutex (sync.RWMutex): Read-Write mutex for read-heavy workloads. RLock() allows multiple readers simultaneously; Lock() gives exclusive access to writers. Writers block readers and vice versa.
RLock and RUnlock: For read-only access. Multiple goroutines can hold RLock() simultaneously. Use when you have many readers and few writers.
Deadlock from Re-entrancy: A goroutine tries to Lock() a mutex it already holds. It will block forever waiting for itself to unlock.
Circular Waiting Deadlock: Two or more goroutines each hold one lock and wait for another. Prevent by always acquiring locks in the same order (e.g., by ID or address).
Lock Ordering: Always acquire multiple locks in the same consistent order across your entire program to prevent circular deadlock. Using IDs, addresses, or any consistent scheme works—the key is consistency.
Unsafe Methods: Methods that don't acquire locks themselves, meant to be called only by code that already holds the required locks. Common pattern: public methods lock, then call unsafe internal methods.
Mutex vs Channels: Use mutexes to protect shared state; use channels for communication and work coordination. Different tools for different problems.
sync.Once: Ensures a function executes exactly once, no matter how many goroutines call it. Useful for one-time initialization.
Do(func): The Once.Do() method. Safely handles all synchronization.
Data Races: When multiple goroutines access the same variable without synchronization. Use mutexes or channels to prevent them.
Eleanor's Philosophy on Synchronization
"Channels make you think about communication. Mutexes make you think about protection. Both make you think. That's the point. Concurrency is hard in every language—Go just gives you good tools and expects you to use them thoughtfully."
Next chapter: Atomic operations and lock-free programming—where Eleanor shows Ethan that sometimes you don't need a lock at all.
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.