Pavel Sanikovich

Go From Zero to Depth — Part 5: Concurrency in Go Is Easy — Until You Get It Wrong

Concurrency is the reason many people choose Go. You write go func() {} once, and suddenly your program feels modern, fast, and powerful. Goroutines are cheap. Channels feel intuitive. The syntax is friendly. Everything seems to work — until one day it doesn’t. And when it breaks, it breaks in ways that are deeply confusing for beginners.

This is not a flaw in Go. It’s a consequence of how concurrency really works.

Most beginner tutorials teach concurrency as a mechanical skill. Start a goroutine. Send data through a channel. Use a WaitGroup. Avoid races. These rules are useful, but they don’t explain why certain patterns are safe and others are dangerous. Without a mental model, concurrency becomes cargo cult programming: copy a pattern, hope it works, and pray nothing strange happens in production.

To understand concurrency in Go, you need to stop thinking in terms of threads and start thinking in terms of coordination and ownership.

Let’s begin with the illusion.

go func() {
    fmt.Println("Hello from goroutine")
}()

This looks harmless. It usually works. But what actually happened? You did not “start a thread.” You asked the Go runtime to schedule a function for concurrent execution. You don’t control when it runs. You don’t control where it runs. You don’t even know if it runs at all before the program exits.

This is the first important idea: goroutines are not threads, and concurrency is not parallelism. Goroutines are tasks. The runtime decides how and when they execute. Sometimes they run in parallel. Sometimes they don’t. Sometimes thousands of them share a single OS thread.

Once you accept that loss of control, things start to make sense.

Now consider shared memory:

var x int

go func() {
    x = 42
}()

fmt.Println(x)

This code compiles. It may print 42. It may print 0. It may work for years and then break after a minor refactor. This is not a timing issue. It’s a memory visibility issue.

Concurrency is not just about doing things at the same time. It’s about when changes become visible to other parts of the program. Without synchronization, Go makes no promises. This is why data races are not “bugs you sometimes see.” They are undefined behavior, and the race detector (go run -race) exists precisely to catch them.

Beginners often try to fix this by adding sleep calls:

time.Sleep(time.Millisecond)
fmt.Println(x)

This is not synchronization. This is gambling.

The correct fix is not “wait longer.” The correct fix is to establish a happens-before relationship. In Go, this usually means communication.

done := make(chan struct{})

go func() {
    x = 42
    close(done)
}()

<-done
fmt.Println(x)

Now the program has a guarantee. The receive on done happens after the write to x. This is not a convention. It is a formal property of Go’s memory model.

This leads to one of the most important ideas in Go concurrency: don’t communicate by sharing memory; share memory by communicating. This sentence is often quoted, but rarely understood. It does not mean “never use shared variables.” It means that coordination should happen through communication, not through ad-hoc shared state.

An unbuffered channel is not a queue. It is a synchronization point.

Consider this example:

ch := make(chan int)

go func() {
    ch <- 42
}()

v := <-ch
fmt.Println(v)

The channel does two things at once. It transfers a value, and it synchronizes execution. Because the channel is unbuffered, the send cannot complete until the receive happens. This creates a clear ordering in time and memory. That ordering is the real value.

Problems begin when beginners treat channels as generic data pipes without thinking about ownership.

go producer(ch)
go consumer(ch)
go consumer(ch)

Who owns the channel? Who closes it? What happens if one consumer exits early? These are not stylistic questions. They are correctness questions. Every concurrent design must answer them explicitly.

Another common beginner mistake is assuming goroutines are “fire and forget.”

for _, v := range values {
    go process(v)
}

This looks elegant, but it hides several problems. How many goroutines are you spawning? What happens if process blocks? What if it panics? What if it allocates memory faster than the GC can keep up? Concurrency magnifies every inefficiency.

This is why experienced Go developers often prefer boring patterns: worker pools, bounded concurrency, explicit lifetimes. Not because they love ceremony, but because these patterns make ownership and limits visible.

Here is a safer pattern:

sem := make(chan struct{}, 10)

for _, v := range values {
    sem <- struct{}{}
    go func(val int) {
        defer func() { <-sem }()
        process(val)
    }(v)
}

Now concurrency is bounded. Memory usage is predictable. Failure modes are controlled. This is what “thinking concurrently” actually means.

Concurrency also interacts deeply with everything you learned earlier. Closures extend lifetimes. Pointers introduce shared state. Escape analysis moves values to the heap. Goroutines delay execution. Combine these carelessly, and you get subtle bugs that are impossible to reason about without a solid mental model.

This is why Go concurrency feels easy at first and hard later. The syntax removes friction, but the responsibility remains. Go does not protect you from bad design; it gives you tools that make good design explicit.

The goal is not to avoid concurrency. The goal is to structure it so that ownership is clear, lifetimes are bounded, and communication defines order. When you do that, Go concurrency becomes not only safe, but elegant.

In the next part, we’ll go one level deeper and look at what the runtime actually does when you start a goroutine. We’ll explore the scheduler, the M–P–G model, and why understanding it changes how you write concurrent code. This is where many developers finally stop guessing and start predicting behavior.

Concurrency is not magic. It’s discipline, expressed through a language that rewards clarity.

Want to go further?

This series focuses on understanding Go, not just using it.

If you want to continue in the same mindset, Educative is a great next step.

It’s a single subscription that gives you access to hundreds of in-depth, text-based courses — from Go internals and concurrency to system design and distributed systems. No videos, no per-course purchases, just structured learning you can move through at your own pace.

👉 Explore the full Educative library here
