Sourav Mansingh

Posted on • Originally published at souravsnigdha.Medium

The Hidden Dangers Lurking in Your Concurrent Code (And How Go Helps You Survive)

A journey through the beautiful chaos of concurrent programming.

You’re sitting at your desk at 3 AM, staring at a bug that only appears in production. Sometimes. Maybe once a week. Your code works perfectly on your machine. The tests pass. But somewhere, somehow, in the wild chaos of real-world usage, your program occasionally produces the wrong result. Welcome to the maddening world of concurrency bugs.

The Deceptive Simplicity

Let’s start with something that looks innocent enough:

var data int

go func(){
   data++
}()

if data == 0 {
    fmt.Printf("the value is %v.\n", data)
}

Quick quiz: What does this print?
If you answered “nothing” or “the value is 0”, you are partially right. If you answered “the value is 1”, you’re also partially right. The disturbing truth? All three outcomes are possible:

  1. Nothing prints (the data++ runs before the check, so data == 0 fails)
  2. “the value is 0” prints (the check and the Printf both run before data++)
  3. “the value is 1” prints (the check runs before data++, but data++ runs before the Printf)

This is a race condition, and it’s one of the most insidious bugs you’ll ever encounter. Why? Because it might work correctly 99.9% of the time. Your tests pass. Your code reviews don’t catch it. Then, two years into production, when your system is under unprecedented load, everything breaks.

The Seductive “Fix” That Isn’t

Here’s what many developers try first:

var data int
go func() { data++ }()
time.Sleep(1*time.Second) // "This should fix it!"
if data == 0 {
  fmt.Printf("the value is %v.\n",data)
}

Does this solve the problem? No. You’ve just made the bug less likely to appear. You’re still playing Russian roulette with your program’s correctness — you’ve just added more empty chambers. Plus, you’ve introduced a one-second delay to make your program almost correct. That’s like putting a Band-Aid on a broken bone.
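The real fix is to make the ordering explicit instead of guessing at timing. One minimal sketch (among several valid approaches) uses sync.WaitGroup so the read is guaranteed to happen after the write; incrementSafely is a name invented here for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// incrementSafely starts the goroutine and waits for it, so the
// read below is guaranteed to observe the write.
func incrementSafely() int {
	var data int
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		data++
	}()
	wg.Wait() // establishes happens-before: the write completes before we return
	return data
}

func main() {
	fmt.Printf("the value is %v.\n", incrementSafely()) // always 1, never a race
}
```

No arbitrary sleep, no roulette: the WaitGroup turns "hopefully the goroutine ran" into a guarantee.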

When One Line of Code Isn’t One Operation
Pop quiz #2: How many operations happen in this line?

i++

If you said “one”, you’re thinking like a human. Computers think differently:

  1. Retrieve the value of i
  2. Increment the value
  3. Store the value back to i

Three separate operations. And here’s the kicker: combining atomic operations doesn’t create a larger atomic operation. Between any of these steps, another goroutine can swoop in and read or modify i, creating subtle corruption in your data.
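When a shared counter really is all you need, Go’s sync/atomic package collapses that read-modify-write into a single indivisible step. A small sketch (atomicCount is a name made up for illustration):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// atomicCount increments a shared counter from n goroutines at once.
// atomic.AddInt64 performs the load, add, and store as one indivisible
// operation, so no update can be lost.
func atomicCount(n int) int64 {
	var i int64
	var wg sync.WaitGroup
	for j := 0; j < n; j++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&i, 1)
		}()
	}
	wg.Wait()
	return i
}

func main() {
	// A plain i++ here could lose updates; the atomic version cannot.
	fmt.Println(atomicCount(1000))
}
```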

The Deadlock Dance

Race conditions are bad, but at least your program keeps running (incorrectly). Deadlocks are worse — they’re the death of your program. Let me show you how easy it is to write one:

type value struct {
    mu sync.Mutex
    value int
}

printSum := func(v1, v2 *value) {
    v1.mu.Lock()
    defer v1.mu.Unlock()

    time.Sleep(2*time.Second)  // Simulate work

    v2.mu.Lock()
    defer v2.mu.Unlock()

    fmt.Printf("sum=%v\n",v1.value + v2.value)
}

var a, b value
go printSum(&a, &b)
go printSum(&b, &a)

Can you spot the problem? The first goroutine locks a, then tries to lock b. The second goroutine locks b, then tries to lock a. They’re now frozen, waiting for each other forever. Your program is dead.

This is like two people in a hallway, both trying to be polite by stepping to the same side, then both stepping back, then both stepping to the other side... forever.

The Hallway of Eternal Politeness (Livelock)

Speaking of hallways, let’s code that awkward dance:

var left, right int32

tryLeft := func() bool {
     atomic.AddInt32(&left, 1)
     defer atomic.AddInt32(&left, -1)

     if atomic.LoadInt32(&left) == 1 {
       return true    // Success
     }
      return false  // Someone else is trying too
}

tryRight := func() bool {
    atomic.AddInt32(&right, 1)
    defer atomic.AddInt32(&right, -1)

    if atomic.LoadInt32(&right) == 1 {
      return true
    }
    return false
}

// Each person runs this, trying to pass the other
tryToPass := func() {
    for i := 0; i < 5; i++ {
        if tryLeft() || tryRight() {
            return // Made it through!
        }
        // Both failed, try again
    }
    // Give up in frustration
}

This is a livelock. Unlike a deadlock, your CPU is working hard, burning cycles. But nothing productive is happening. It’s the concurrency equivalent of running on a treadmill.
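A common escape from livelock is randomized backoff: after a failed attempt, each party waits a different random amount of time, which breaks the lockstep symmetry. A self-contained sketch of that idea (tryDir and crossHallway are illustrative names, not from the original example):

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"sync/atomic"
	"time"
)

// tryDir succeeds only when this goroutine is the sole one
// attempting that direction at this instant.
func tryDir(dir *int32) bool {
	atomic.AddInt32(dir, 1)
	defer atomic.AddInt32(dir, -1)
	return atomic.LoadInt32(dir) == 1
}

// crossHallway reports whether both people got through. The fix is
// the randomized backoff after each failed attempt: the two parties
// stop mirroring each other's moves in lockstep.
func crossHallway() bool {
	var left, right int32
	var passed int32
	var wg sync.WaitGroup

	for person := 0; person < 2; person++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100; i++ {
				if tryDir(&left) || tryDir(&right) {
					atomic.AddInt32(&passed, 1)
					return // made it through
				}
				// Back off for a random delay before retrying.
				time.Sleep(time.Duration(rand.Intn(3)+1) * time.Millisecond)
			}
		}()
	}
	wg.Wait()
	return atomic.LoadInt32(&passed) == 2
}

func main() {
	fmt.Println("both passed:", crossHallway())
}
```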

The Greedy and the Starved

Here’s a more subtle problem — starvation:

var sharedLock sync.Mutex

greedyWorker := func() {
    var count int
    for begin := time.Now(); time.Since(begin) <= 1*time.Second; {
        sharedLock.Lock()
        time.Sleep(3*time.Nanosecond) // Do work
        sharedLock.Unlock()
        count++
    }
    fmt.Printf("Greedy worker: %v loops\n",count)
}

politeWorker := func() {
     var count int
     for begin := time.Now(); time.Since(begin) <= 1*time.Second; {
         sharedLock.Lock()
         time.Sleep(1*time.Nanosecond)
         sharedLock.Unlock()

         sharedLock.Lock()
         time.Sleep(1*time.Nanosecond)
         sharedLock.Unlock()

         sharedLock.Lock()
         time.Sleep(1*time.Nanosecond)
         sharedLock.Unlock()

         count++
     }
     fmt.Printf("Polite worker: %v loops\n", count)
}

go greedyWorker()
go politeWorker()

Output:

Greedy worker: 471287 loops
Polite worker: 289777 loops

Same amount of work, but the greedy worker gets nearly twice as much done! The polite worker is being starved of resources. This is the concurrency equivalent of one person hogging the microphone at a meeting while others wait patiently for their turn that never comes.

The Horror of Uncertainty

Here’s what makes all of this truly terrifying: you can’t reason about concurrent code the way you reason about sequential code. Look at this function signature:

func CalculatePi(begin, end int64, pi *Pi)

Questions immediately arise:

  • Is this function thread-safe?
  • Should I call it concurrently myself?
  • Who handles synchronization ?
  • Is the Pi type safe for concurrent access?

Without documentation or reading the implementation, you have no idea. And this is just one function. Scale this to a large codebase, and you see the problem.

How Go Fights Back

The good news? Go gives you powerful tools to tame this chaos:

  1. Channels for safe communication:
func CalculatePi(begin, end int64) <-chan uint {
    result := make(chan uint)
    go func() {
        defer close(result) // closing signals the consumer "no more results"
        // Calculate and send results
        result <- calculation
    }()
    return result
}

The signature itself tells you: “This function handles concurrency internally. Just read from the channel.”
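Consuming such a channel is just as self-describing: range over it until the producer closes it. A runnable sketch, where this CalculatePi is a stand-in that streams placeholder values rather than real digits of pi:

```go
package main

import "fmt"

// CalculatePi is a stand-in producer with the signature from the
// article; it streams placeholder values, not an actual pi algorithm.
func CalculatePi(begin, end int64) <-chan uint {
	result := make(chan uint)
	go func() {
		defer close(result) // closing tells the consumer there is no more data
		for i := begin; i < end; i++ {
			result <- uint(i % 10) // placeholder "digit"
		}
	}()
	return result
}

func main() {
	// The consumer needs no locks and no sleeps: the range loop
	// exits cleanly when the producer closes the channel.
	for digit := range CalculatePi(0, 5) {
		fmt.Println(digit)
	}
}
```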

  2. Clear patterns for synchronization:
var memoryAccess sync.Mutex
var value int

go func() {
    memoryAccess.Lock()
    value++
    memoryAccess.Unlock()
}()

memoryAccess.Lock()
if value == 0 {
    fmt.Printf("the value is %v.\n", value)
}
memoryAccess.Unlock()
  3. A runtime that does the heavy lifting:
  • Automatic goroutine management (no manual thread pools)
  • Garbage collection pauses under 100 microseconds
  • Smart scheduling across CPU cores

The Path Forward

Concurrency will always be hard. The fundamental problems — race conditions, deadlocks, livelocks, and starvation — aren’t going away. But Go makes it manageable. It gives you:

  1. Primitives that encourage correct patterns (channels, goroutines)
  2. A runtime that handles complexity (scheduling, memory management)
  3. Tools to catch bugs (race detector, profiler)

The key is understanding these dangers exist and respecting them. Every time you add concurrency to your code, you’re making a trade: complexity for performance. Make that trade consciously, document your decisions, and test thoroughly.

Because that 3 AM bug hunt? The one that only happens in production? That’s not where you want to be learning these lessons.

Key Takeaways

  • Race conditions are silent killers that can hide for years
  • Don’t use time.Sleep() to “fix” concurrency bugs — fix the logic
  • Atomicity is context-dependent — something atomic in one scope isn’t necessarily atomic in another
  • Deadlocks need four conditions (Coffman Conditions) — break one to prevent them
  • Livelocks waste CPU while making no progress
  • Starvation happens when greedy processes hog resources
  • Go’s concurrency primitives make correct code easier to write
  • Document your concurrency assumptions — future you will thank present you
