Gabriel Anhaia

Concurrency in Go Without the PhD: Patterns That Actually Work


Hot take: 90% of Go concurrency tutorials are actively harmful. They teach you the mechanics of channels without teaching you when NOT to use them. That's like teaching someone to drive by only explaining the gas pedal.

This post shows 4 real concurrency patterns with complete, copy-pasteable examples. Each pattern includes "when to use this" AND "when NOT to use this." No toy counter examples. No dining-philosophers puzzles. Just the patterns you'll reach for in actual production code.

The Mental Model (Keep It Simple)

Goroutines are lightweight threads managed by the Go runtime. Each one starts with a few KB of stack space, so spinning up thousands of them is totally fine. Channels are how goroutines talk to each other and coordinate. That's the whole model. Goroutines do work; channels move data between them.

goroutine A ---> [channel] ---> goroutine B
goroutine C ---/

You don't need to understand the scheduler, the GMP model, or any of that to write solid concurrent Go. You just need patterns -- reusable structures that handle the coordination so you can focus on the actual business logic.

Pattern 1: Worker Pool

This is the pattern you'll use most. You have N tasks and want to process them with M workers running in parallel. Think batch processing, parallel API calls, image resizing -- anything where you have a pile of independent work.

package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

func main() {
    urls := []string{
        "https://api.example.com/users/1",
        "https://api.example.com/users/2",
        "https://api.example.com/users/3",
        "https://api.example.com/users/4",
        "https://api.example.com/users/5",
        "https://api.example.com/users/6",
        "https://api.example.com/users/7",
        "https://api.example.com/users/8",
        "https://api.example.com/users/9",
        "https://api.example.com/users/10",
    }

    numWorkers := 3
    jobs := make(chan string, len(urls))
    results := make(chan string, len(urls))

    var wg sync.WaitGroup

    // Start workers
    for w := 1; w <= numWorkers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for url := range jobs {
                result := fetchURL(w, url)
                results <- result
            }
        }()
    }

    // Send jobs
    for _, url := range urls {
        jobs <- url
    }
    close(jobs)

    // Wait for all workers, then close results
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect results
    for result := range results {
        fmt.Println(result)
    }
}

func fetchURL(workerID int, url string) string {
    // Simulate HTTP request with variable latency
    time.Sleep(time.Duration(100+rand.Intn(400)) * time.Millisecond)
    return fmt.Sprintf("worker %d fetched %s", workerID, url)
}

The structure is always the same: a jobs channel feeds work in, a results channel collects output, and a sync.WaitGroup tells you when everyone's done. The buffered channels prevent goroutines from blocking unnecessarily. Closing the jobs channel signals workers to stop -- the range loop exits when the channel is closed and drained. Note that the goroutine closures reference w directly -- since Go 1.22, loop variables are per-iteration, so each goroutine gets its own copy. No need for the old go func(id int) { ... }(w) workaround.

When to use this: Batch processing, parallel API calls, any situation where you have many independent tasks and want to limit concurrency. The worker count gives you a natural throttle -- set it to 3 and you'll never hit more than 3 endpoints at once.

When NOT to use this: If you have fewer than ~100 items and each operation is fast (under a millisecond), the overhead of channels and goroutines isn't worth it. A plain for loop will be simpler and just as fast. Also skip this when tasks depend on each other's results -- that's sequential work dressed up as parallelism.

One common mistake: setting numWorkers too high. If you're calling an external API, 3-10 workers is usually right. If you're doing CPU-bound work, runtime.NumCPU() is a good starting point. More workers than that just means more goroutines fighting for the same resources.

Pattern 2: Fan-Out, Fan-In

Fan-out means launching multiple goroutines to do different work. Fan-in means funneling all their results back into a single channel. This is your go-to when you need to aggregate data from multiple independent sources.

package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

type PriceQuote struct {
    Provider string
    Price    float64
    Err      error
}

func main() {
    quotes := fetchAllQuotes("AAPL")

    for _, q := range quotes {
        if q.Err != nil {
            fmt.Printf("  %s: error - %v\n", q.Provider, q.Err)
            continue
        }
        fmt.Printf("  %s: $%.2f\n", q.Provider, q.Price)
    }
}

func fetchAllQuotes(symbol string) []PriceQuote {
    providers := []struct {
        name    string
        fetcher func(string) (float64, error)
    }{
        {"Bloomberg", fetchFromBloomberg},
        {"Reuters", fetchFromReuters},
        {"Yahoo", fetchFromYahoo},
    }

    results := make(chan PriceQuote, len(providers))
    var wg sync.WaitGroup

    // Fan-out: one goroutine per provider
    for _, p := range providers {
        wg.Add(1)
        go func() {
            defer wg.Done()
            price, err := p.fetcher(symbol)
            results <- PriceQuote{Provider: p.name, Price: price, Err: err}
        }()
    }

    // Close channel once all goroutines finish
    go func() {
        wg.Wait()
        close(results)
    }()

    // Fan-in: collect all results
    var quotes []PriceQuote
    for q := range results {
        quotes = append(quotes, q)
    }

    return quotes
}

func fetchFromBloomberg(symbol string) (float64, error) {
    time.Sleep(time.Duration(200+rand.Intn(300)) * time.Millisecond)
    return 185.50 + rand.Float64()*2, nil
}

func fetchFromReuters(symbol string) (float64, error) {
    time.Sleep(time.Duration(150+rand.Intn(350)) * time.Millisecond)
    return 185.30 + rand.Float64()*2, nil
}

func fetchFromYahoo(symbol string) (float64, error) {
    time.Sleep(time.Duration(100+rand.Intn(200)) * time.Millisecond)
    return 185.40 + rand.Float64()*2, nil
}

Each provider runs in its own goroutine and pushes its result into the shared results channel. The main function doesn't care which one finishes first -- it just collects everything. The total wait time is the duration of the slowest provider, not the sum of all three. That's the whole point.

When to use this: Aggregating data from multiple independent sources. API composition (calling 3 microservices and merging results). Any situation where you need "all of these things, as fast as possible."

When NOT to use this: When the order of operations matters. If provider B needs the result from provider A, that's not fan-out -- that's a pipeline. Also skip it when sources aren't truly independent. If they share a database connection pool, you might just be moving the bottleneck rather than eliminating it.

Notice how similar this looks to the worker pool. The difference is intent. A worker pool processes a homogeneous batch of tasks (same operation, different data). Fan-out/fan-in runs heterogeneous tasks (different operations, merged results). The plumbing is almost identical -- channels, WaitGroup, close-when-done -- but the use cases are distinct.

Pattern 3: Context Cancellation

Goroutine leaks are the #1 production concurrency bug I see in Go codebases. You launch a goroutine, the caller gives up (HTTP client disconnects, timeout fires), and the goroutine keeps running forever. Multiply by thousands of requests and your service is bleeding memory.

Context cancellation is how you fix this. Here's a realistic example -- an HTTP handler that does background work and properly cancels it when the client disconnects:

package main

import (
    "context"
    "fmt"
    "log"
    "math/rand"
    "net/http"
    "time"
)

func main() {
    http.HandleFunc("/report", handleReport)
    log.Println("listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

func handleReport(w http.ResponseWriter, r *http.Request) {
    // Create a timeout context — 5 seconds max for this request
    ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
    defer cancel()

    result, err := generateReport(ctx)
    if err != nil {
        if ctx.Err() == context.DeadlineExceeded {
            http.Error(w, "report generation timed out", http.StatusGatewayTimeout)
            return
        }
        if ctx.Err() == context.Canceled {
            // Client disconnected, no point writing a response
            return
        }
        http.Error(w, "internal error", http.StatusInternalServerError)
        return
    }

    fmt.Fprint(w, result)
}

func generateReport(ctx context.Context) (string, error) {
    steps := []string{"querying database", "aggregating metrics", "building charts"}

    for _, step := range steps {
        log.Printf("report: %s...", step)

        // Simulate work that respects cancellation
        select {
        case <-ctx.Done():
            log.Printf("report: cancelled during %s", step)
            return "", ctx.Err()
        case <-time.After(time.Duration(500+rand.Intn(1500)) * time.Millisecond):
            // Step completed
        }
    }

    return "report: all metrics nominal", nil
}

The key line is r.Context(). Every http.Request carries a context that gets cancelled when the client disconnects. By building your timeout on top of it with context.WithTimeout, you get two cancellation signals for free: the client hanging up, and your own deadline.

Compare this to the version without context cancellation -- where generateReport is just a function that sleeps and returns. If the client disconnects at step 1, the server keeps grinding through steps 2 and 3 for nothing. With thousands of concurrent requests, those orphaned goroutines pile up fast.

The rule: every goroutine you launch should accept a context.Context and check it. Period. Make it the first parameter of any function that does work in a goroutine. This convention is so universal in Go that the standard library follows it everywhere -- database/sql, net/http, os/exec. Follow the convention. Your future self debugging a memory leak at 2 AM will thank you.

When to use this: Always. Every HTTP handler, every background worker, every goroutine that does non-trivial work. This isn't optional in production code.

When NOT to use this: Truly fire-and-forget operations where you genuinely don't care if they complete (audit logs, best-effort metrics). Even then, you probably want a timeout to prevent leaks.

Pattern 4: Select with Timeout

The select statement is Go's multiplexer. It lets a goroutine wait on multiple channel operations at once -- whichever one is ready first wins. This unlocks timeout patterns, graceful shutdowns, and non-blocking coordination.

I go deep on production concurrency patterns in my book, but here's the core of select in action:

package main

import (
    "context"
    "fmt"
    "log"
    "math/rand"
    "os"
    "os/signal"
    "sync"
    "syscall"
    "time"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())

    // Listen for OS interrupt signals (Ctrl+C)
    sigCh := make(chan os.Signal, 1)
    signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)

    resultCh := make(chan string, 10)

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        processQueue(ctx, resultCh)
    }()

    // Main loop: multiplex between results, signals, and a status ticker
    ticker := time.NewTicker(3 * time.Second)
    defer ticker.Stop()

    for {
        select {
        case result, ok := <-resultCh:
            if !ok {
                fmt.Println("result channel closed, shutting down")
                cancel()
                wg.Wait()
                return
            }
            fmt.Printf("got result: %s\n", result)

        case sig := <-sigCh:
            fmt.Printf("\nreceived %s, starting graceful shutdown...\n", sig)
            cancel()
            wg.Wait()
            fmt.Println("shutdown complete")
            return

        case <-ticker.C:
            log.Println("status: still running")
        }
    }
}

func processQueue(ctx context.Context, results chan<- string) {
    taskID := 0
    for {
        taskID++
        select {
        case <-ctx.Done():
            log.Println("worker: context cancelled, stopping")
            return
        case <-time.After(time.Duration(500+rand.Intn(1000)) * time.Millisecond):
            results <- fmt.Sprintf("task-%d completed", taskID)
        }
    }
}

This single select statement handles three concerns at once: processing results as they arrive, catching OS signals for graceful shutdown, and printing periodic status updates. Without select, you'd need separate goroutines polling each channel -- messy and error-prone.

A few things to notice. The select blocks until one case is ready. If multiple cases are ready at the same time, Go picks one at random -- there's no priority ordering. The ok check on resultCh detects when the channel has been closed, which is a clean way to signal "no more work." And the ticker.C case gives you a heartbeat without spinning in a loop.

When to use this: Timeout patterns (wrap any operation with a time.After case). Graceful shutdown (listen for OS signals alongside your work). Multiplexing results from multiple channels. Any coordination where you're waiting for "whichever of these things happens first."

When NOT to use this: Don't use select with a single case -- that's just a channel receive with extra syntax. A note on time.After: older guides warn against using it in loops because it used to leak memory (each call allocated a timer that wouldn't be garbage collected until it fired). Since Go 1.23, this is fixed -- unused time.After timers are garbage collected properly. It's still slightly less efficient than reusing a time.NewTimer with Reset, but it's no longer a memory leak.

The Golden Rule

"If you're not sure whether you need concurrency, you don't."

Don't reach for goroutines to make code "faster." Concurrent code is harder to read, harder to debug, and harder to get right. Reach for goroutines when you have genuinely independent work -- when task B doesn't need the result of task A, and you're waiting on something (network, disk, external service) that would otherwise block everything else.

A sequential program that's correct and easy to understand will beat a concurrent one that's fast but full of subtle race conditions every single time. Start sequentially. Profile. Add concurrency where the data tells you it matters, using one of these four patterns.

And when you do add concurrency, always run your tests with -race. The Go race detector catches data races at runtime and it's saved me more times than I can count. Add it to your CI pipeline. It's not optional.


Want to go deeper?

I wrote a book that covers everything in this series -- and a lot more: error handling patterns, testing strategies, production deployment, and the stuff you only learn after shipping Go to production.
