Sk

How To Synchronize Threads In Go

Single-threaded code brings enough headaches on its own. Add a second thread, and you graduate to a whole new class of headache.

The fix? Mutexes: traffic cops for your threads and data.
Once you understand them, thread synchronization becomes second nature in any language.

Working in both C++ and Go, I’ve run into all the usual chaos:

  • Race conditions that sometimes swallow data
  • Segfaults from threads trampling memory
  • And the silent killer: deadlocks

That last one’s the worst: no crash, no error. Just a dead program, stuck in an eternal thread standoff.

But it all starts to click when you get the core idea behind a mutex.

The best part? Every language speaks mutex:

  • Go → sync.Mutex
  • C++ → std::mutex
  • Python → threading.Lock()
  • Java → ReentrantLock

In this post, I’ll break down mutexes as a concept, show you how deadlocks happen, and leave you with enough intuition to handle threaded code in any language.

Learn once → apply everywhere.

Mutexes: Mutual Exclusion Lock

Threads introduce a whole new category of problems, especially in Go, where spawning thousands is practically free.

Now imagine two threads hitting the same data source at the exact same time. That’s chaos. Race conditions, data corruption, mystery bugs, things you don’t want to debug, let alone explain to your team.

Enter mutexes: the traffic cops between your threads and shared data.

Without a lock:

thread A  --->     data source  <--- thread B

With a lock (shared between both threads):

thread A  [lock]--->     data source  <---[lock] thread B

The mutex’s job is simple: only one thread enters at a time.
If thread A owns the lock, thread B gets told: "Wait your turn."

Here’s a simple example of slice access with and without locks:

Without locks:

package main

import (
    "fmt"
    "time"
)

func main() {
    var numbers []int

    // Spin up 5 goroutines that all append to the same slice.
    for i := 0; i < 5; i++ {
        go func(n int) {
            // No locking here, this will likely cause a data race
            numbers = append(numbers, n)
            fmt.Println("Appended", n, "→", numbers)
        }(i)
    }

    // Give them a moment to run
    time.Sleep(1 * time.Second)
}


With locks:

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var (
        numbers []int
        mu      sync.Mutex
    )

    for i := 0; i < 5; i++ {
        go func(n int) {
            mu.Lock()           // acquire the lock
            defer mu.Unlock()   // ensure it’s released, even on panic

            numbers = append(numbers, n)
            fmt.Println("Appended", n, "→", numbers)
        }(i)
    }

    time.Sleep(1 * time.Second)
}


Notice how we do:

mu.Lock()
defer mu.Unlock()

The defer guarantees that no matter how we exit that goroutine, normal return or panic, the lock will be released.

Once a goroutine touches shared data, lock it down. Trust me, future you will be grateful.


So, what exactly is a deadlock?


Deadlocks

Back to our traffic cop analogy:

thread A  [lock]--->     data source  <---[lock] thread B

This works because one shared lock controls access. But what happens when we introduce two shared locks in the same lane?

thread A  [lock]--[lock]->     data source  <---[lock] thread B

Now you’ve got two traffic cops, and neither knows who’s in charge. Thread A is stuck waiting on a lock it can never get past, frozen in place forever. That’s a classic deadlock.

The usual suspect? Nested locks on the same mutex: calling a function that acquires a lock from within another function that’s already holding it.

Here’s a real-world example:

func (m *ScheduledTask) Create(...) (task, error) {
    m.mu.Lock()                // LOCK 1
    defer m.mu.Unlock()        // UNLOCK 1 at the end

    // ... set up the task t ...

    if err := m.saveTasks(); err != nil {  // LOCK 2 inside
        return task{}, err
    }

    return t, nil
}

Now look inside saveTasks:

func (m *ScheduledTask) saveTasks() error {
    m.mu.Lock()               // LOCK 2 (again)
    defer m.mu.Unlock()

    data, err := json.MarshalIndent(m.tasks, "", "  ")
    if err != nil {
        return err
    }

    return os.WriteFile(tasks, data, 0644)
}

Deadlock.
Why? Because Create() already holds the lock, and saveTasks() tries to acquire it again before the first one is released. Go's sync.Mutex is not reentrant, so that second Lock blocks forever. Goroutines don’t complain, they just silently freeze. No crash, no stack trace, just a zombie goroutine eating resources.

And the main thread? Blissfully unaware. Keeps running while your program hangs in limbo.


If you’re serious about building real-world software, you need to understand synchronization.

The concepts apply across languages. Here's the C++ version:

std::lock_guard<std::mutex> lk(globalIPCData.mapMutex);  // locking before access
UIelement& u = uiSet.get(entityId);

Learn this well.

Once you see mutexes as traffic cops with absolute authority, most thread issues just vanish.

I’ll be posting more deep dives on backend topics, JavaScript, Golang, C++, and low-level systems on Substack. Would love to have you there; come say hi:

Coffee & Kernels

X

More Content:

The Hitchhiker's Guide to Deep Learning: Pytorch and Tensorflow.js Examples, with Python and JS code (skdev.substack.com)

How To Suck Less At Databases and Data Systems with JavaScript Examples (skdev.substack.com)

Top comments (7)

Sk

Learn once → apply everywhere. Here are examples for C++, Python, Java (similar concept):

C++ → std::mutex

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    int counter = 0;
    std::mutex m;
    std::vector<std::thread> threads;

    for (int i = 0; i < 5; ++i) {
        threads.emplace_back([&](){
            std::lock_guard<std::mutex> lock(m); // RAII: lock on ctor, unlock on dtor
            ++counter;                            // critical section
            std::cout << "C++ counter: " << counter << "\n";
        });
    }

    for (auto &t : threads) t.join();
    return 0;
}

Python → threading.Lock()

import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    with lock:                # context manager acquires & releases
        counter += 1          # critical section
        print(f"Python counter: {counter}")

threads = []
for _ in range(5):
    t = threading.Thread(target=worker)
    t.start()
    threads.append(t)

for t in threads:
    t.join()

Java → ReentrantLock

import java.util.concurrent.locks.ReentrantLock;

public class Main {
    private static int counter = 0;
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[5];

        for (int i = 0; i < 5; i++) {
            threads[i] = new Thread(() -> {
                lock.lock();     // 🔒 acquire
                try {
                    counter++;   // critical section
                    System.out.println("Java counter: " + counter);
                } finally {
                    lock.unlock(); // 🔓 release
                }
            });
            threads[i].start();
        }

        for (Thread t : threads) t.join();
    }
}

🔑 Key takeaway

No matter the language, the recipe is:

  1. Acquire the lock/mutex before touching shared data
  2. Do your minimal critical work
  3. Release the lock/mutex (or use a scope/RAII/context manager to auto-release)
Parag Nandy Roy

Every beginner needs this...

Sk

I totally agree, it used to trip me up a lot in the past!

Nathan Tarbert

This is extremely impressive, it covers all the stuff that trips me up with threads in a way I actually get

Sk

I am glad you found it useful! 🔥

Dotallio

The deadlock explanation hit home, those are brutal to debug. Do you have a go-to trick for hunting them down in bigger codebases?

Sk

A quick trick is to dump all goroutine stack traces at runtime (e.g., with runtime.Stack or go tool pprof) and look for goroutines stuck on Lock calls to pinpoint exactly which locks are waiting on each other.