I used to think sync.Mutex was the only way to make Go concurrency safe, until I had to trace a performance cliff in a high-throughput websocket server. Turns out, parking and waking goroutines has a cost. Atomic operations in Go let you bypass the scheduler entirely and lean directly on the CPU for lock-free programming.
## The hardware behind atomic operations in Go
You already know `counter++` is actually three operations: a read, an add, and a write. If two goroutines interleave them, one increment is lost. That's a data race.
Atomics fix this using CPU-level instructions (like LOCK XADD on x86). The CPU locks the memory bus for that specific address just long enough to do the read-modify-write. No scheduler, no kernel context switch. It just happens.
One of the most frustrating things about the sync/atomic package pre-Go 1.19 was the lack of type safety. You'd pass pointers to int64 and hope you didn't accidentally pass a regular int, or trip over the alignment requirement for 64-bit values on a 32-bit architecture and panic at runtime. The typed API fixed this, mostly.
## The modern API
Go 1.19 gave us typed wrappers. I almost never use the raw atomic.AddInt64 functions anymore. The typed API is just cleaner and prevents stupid pointer mistakes.
```go
var counter atomic.Int64
counter.Add(1)
counter.Store(10)
counter.Load() // 10

var ptr atomic.Pointer[Config]
ptr.Store(&Config{Timeout: 5})
```
The full set of wrappers available now:
| Type | Description |
|---|---|
| `atomic.Int32` / `atomic.Int64` | Signed integer counters |
| `atomic.Uint32` / `atomic.Uint64` | Unsigned integer counters |
| `atomic.Uintptr` | Pointer-sized unsigned integer |
| `atomic.Bool` | Boolean flag |
| `atomic.Pointer[T]` | Generic pointer to any type |
| `atomic.Value` | Any type, as long as the concrete type is consistent |
## Which brings me to atomic.Value
I assumed atomic.Value was just a generic bucket, but it has a nasty trap: the concrete type you store the first time locks it in forever.
```go
var v atomic.Value
v.Store(map[string]int{"a": 1})

// This panics: the concrete type differs from the first Store
v.Store(struct{ Name string }{"bad"})
```
The documentation mentions this, but it's easy to miss until your app crashes in production because someone tried to store a nil error interface when it previously held a concrete error type. I've been bitten by this exact thing. If you know the type upfront, atomic.Pointer[T] is infinitely better.
## A practical look at CAS loops
Compare-and-Swap (CAS) is the weirdest pattern to wrap your head around if you're coming from mutexes. You don't lock; you try to update, and if someone else beat you to it, you loop and try again.
Here's how I actually use it for rate limiting:
```go
type RateLimiter struct {
	count  atomic.Int64
	maxRPS int64
}

func (r *RateLimiter) Allow() bool {
	for {
		current := r.count.Load()
		if current >= r.maxRPS {
			return false
		}
		// Did anyone change count since we loaded it?
		if r.count.CompareAndSwap(current, current+1) {
			return true
		}
		// Yes, they did. Loop and try again.
	}
}
```
This blew my mind the first time I wrote it. The loop isn't spinning endlessly; it only retries if there's actual contention at that exact nanosecond.
## Config hot-reloads without pausing
Replacing a config struct while the app is running is where atomic.Pointer[T] shines. A sync.RWMutex works, but every reader has to interact with the lock state.
```go
type Server struct {
	config atomic.Pointer[Config]
}

func (s *Server) UpdateConfig(cfg *Config) {
	s.config.Store(cfg)
}

func (s *Server) handleRequest() {
	cfg := s.config.Load() // Always gets a complete, consistent config
	_ = cfg.Timeout
}
```
I think this might just be my favorite use case. You do all the heavy lifting of parsing and validating the new config off to the side, and the actual update is a single atomic pointer swap. No readers block. No partial reads.
## When to just use a Mutex
If your update touches more than one variable, or involves complex conditional logic spanning multiple fields, stop trying to be clever.
I spent two days trying to coordinate three atomic.Int64 counters to track queue states without locks. It was a buggy, unreadable mess. A simple sync.Mutex solved it in five minutes. Atomics are for single, isolated values. If state A depends on state B, wrap them in a mutex.