Prevent Race Conditions in Go Microservices with Distributed Locks


Introduction

In distributed systems, coordinating access to shared resources, such as database rows or files, or to operations like seat reservation and payment processing, can be challenging when multiple services are involved. This is where distributed locks come in.

In this post, we’ll cover:

  • What distributed locks are
  • Why and when you need them
  • Common pitfalls
  • A practical example in Go using Redsync
  • How to use the lockmanager from the go-common library to simplify implementation

What is a Distributed Lock?

A distributed lock ensures that multiple nodes in a system do not simultaneously perform conflicting operations on shared resources. It’s the distributed equivalent of a mutex, but across processes and machines.
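
Under the hood, most Redis-based locks come down to a single atomic command: SET key value NX PX <ttl>, which writes the key only if it does not already exist. Here is a minimal sketch of that primitive using go-redis; the lock key and token values are illustrative:

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

func main() {
    client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    ctx := context.Background()

    // SET lock:seat:42 <token> NX PX 5000: the write succeeds only if the key
    // is absent, so exactly one caller holds the lock until the key is deleted
    // or the TTL expires.
    ok, err := client.SetNX(ctx, "lock:seat:42", "unique-token", 5*time.Second).Result()
    if err != nil {
        panic(err)
    }
    if !ok {
        fmt.Println("lock is held by someone else")
        return
    }
    fmt.Println("lock acquired")

    // A safe release must delete the key only if it still holds our token
    // (typically via a small Lua script). Redsync, covered below, handles
    // exactly this bookkeeping for you.
}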

Common Use Cases

  • Preventing double booking in a ticketing system
  • Ensuring only one worker processes a message from a queue
  • Serializing access to a critical section of code across pods or services

Redsync: Redis-based Locking 🔒

Redsync is a Go implementation of the Redlock algorithm, using Redis as the coordination backend. It’s simple, reliable, and battle-tested in production.

🔧 Example: Using Redsync

package main

import (
    goredislib "github.com/redis/go-redis/v9"
    "github.com/go-redsync/redsync/v4"
    "github.com/go-redsync/redsync/v4/redis/goredis/v9"
)

func main() {
    // Create a pool with go-redis (or redigo), which is the pool redsync will
    // use while communicating with Redis. This can also be any pool that
    // implements the `redis.Pool` interface.
    client := goredislib.NewClient(&goredislib.Options{
        Addr: "localhost:6379",
    })
    pool := goredis.NewPool(client) // or, pool := redigo.NewPool(...)

    // Create an instance of redsync to be used to obtain a mutual exclusion
    // lock.
    rs := redsync.New(pool)

    // Obtain a new mutex by using the same name for all instances wanting the
    // same lock.
    mutexname := "my-global-mutex"
    mutex := rs.NewMutex(mutexname)

    // Obtain a lock for our given mutex. After this is successful, no one else
    // can obtain the same lock (the same mutex name) until we unlock it.
    if err := mutex.Lock(); err != nil {
        panic(err)
    }

    // Do your work that requires the lock.

    // Release the lock so other processes or threads can obtain a lock.
    if ok, err := mutex.Unlock(); !ok || err != nil {
        panic("unlock failed")
    }
}
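
The same building block covers the queue-worker use case from earlier: derive the mutex name from the message ID so workers only contend on the same message, and use TryLockContext, which attempts the lock once without retrying. A hedged sketch; processMessage, msgID, and handle are illustrative names, not part of the Redsync API:

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/go-redsync/redsync/v4"
    "github.com/go-redsync/redsync/v4/redis/goredis/v9"
    goredislib "github.com/redis/go-redis/v9"
)

// processMessage handles one queue message, skipping it when another worker
// already holds the per-message lock.
func processMessage(ctx context.Context, rs *redsync.Redsync, msgID string, handle func() error) error {
    // One mutex per message: workers contend only on the same message ID.
    mutex := rs.NewMutex(
        "lock:msg:"+msgID,
        redsync.WithExpiry(30*time.Second), // should comfortably exceed handling time
    )

    // TryLockContext attempts acquisition exactly once (no retries); if another
    // worker already holds the lock, skip the message instead of blocking.
    if err := mutex.TryLockContext(ctx); err != nil {
        fmt.Printf("message %s is already being processed, skipping\n", msgID)
        return nil
    }
    defer mutex.UnlockContext(ctx)

    return handle()
}

func main() {
    client := goredislib.NewClient(&goredislib.Options{Addr: "localhost:6379"})
    rs := redsync.New(goredis.NewPool(client))

    _ = processMessage(context.Background(), rs, "msg-123", func() error {
        fmt.Println("processing message msg-123")
        return nil
    })
}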

🤝 Simplifying with go-common LockManager

While Redsync is powerful, you often end up repeating boilerplate: setting up Redis clients, creating mutexes with consistent options, generating unique tokens, and managing error types.

To simplify this, the lockmanager package from the go-common library provides a clean abstraction over Redsync with:

  • ✅ A standard LockManager interface
  • ⚙️ Pluggable token generator and retry logic
  • 🧪 Easy mocking for unit testing (see the test sketch after the example below)

💡 Example: Using go-common LockManager

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/go-redsync/redsync/v4"
    "github.com/redis/go-redis/v9"

    redsyncLocker "github.com/kittipat1413/go-common/framework/lockmanager/redsync"
)

func main() {
    // Initialize Redis client
    client := redis.NewClient(&redis.Options{
        Addr: "localhost:6379",
    })
    defer client.Close()

    // Create LockManager with optional token generator and redsync config
    locker := redsyncLocker.NewRedsyncLockManager(
        client,
        redsyncLocker.WithTokenGenerator(func(key string) string {
            return "token-for:" + key
        }),
        redsyncLocker.WithRedsyncOptions(
            redsync.WithTries(3),
            redsync.WithRetryDelay(100*time.Millisecond),
        ),
    )

    ctx := context.Background()
    lockKey := "demo-lock-key"
    ttl := 2 * time.Second

    // Acquire lock
    token, err := locker.Acquire(ctx, lockKey, ttl)
    if err != nil {
        log.Fatalf("❌ Failed to acquire lock: %v", err)
    }
    fmt.Printf("✅ Lock acquired with token: %s\n", token)

    // Attempt to acquire the same lock again with a different token
    _, err = locker.Acquire(ctx, lockKey, ttl, "another-token")
    if err != nil {
        fmt.Printf("⛔ Re-acquire failed as expected: %v\n", err)
    }

    // Release lock
    if err := locker.Release(ctx, lockKey, token); err != nil {
        log.Fatalf("❌ Failed to release lock: %v", err)
    }
    fmt.Println("🔓 Lock released successfully")

    // Attempt releasing again (should be a no-op or handled gracefully)
    if err := locker.Release(ctx, lockKey, token); err != nil {
        fmt.Printf("🔁 Second release failed gracefully: %v\n", err)
    } else {
        fmt.Println("⚠️ Released again without error (already expired)")
    }
}
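
Because all lock access goes through one small interface, unit tests can swap in an in-memory double and never touch Redis. A hedged sketch: the lockManager interface below only mirrors the Acquire/Release signatures used in the example (consult the go-common source for the real LockManager definition), and fakeLockManager and the seat key are illustrative:

package booking

import (
    "context"
    "errors"
    "testing"
    "time"
)

// lockManager mirrors the Acquire/Release calls used above; see the go-common
// source for the library's actual LockManager interface.
type lockManager interface {
    Acquire(ctx context.Context, key string, ttl time.Duration, token ...string) (string, error)
    Release(ctx context.Context, key string, token string) error
}

// fakeLockManager is an in-memory test double: no Redis required.
type fakeLockManager struct {
    held map[string]string // lock key -> holder token
}

func (f *fakeLockManager) Acquire(_ context.Context, key string, _ time.Duration, token ...string) (string, error) {
    if _, taken := f.held[key]; taken {
        return "", errors.New("lock already held")
    }
    t := "test-token"
    if len(token) > 0 {
        t = token[0]
    }
    f.held[key] = t
    return t, nil
}

func (f *fakeLockManager) Release(_ context.Context, key, token string) error {
    if f.held[key] != token {
        return errors.New("lock not held with this token")
    }
    delete(f.held, key)
    return nil
}

func TestSeatLockIsMutuallyExclusive(t *testing.T) {
    var locker lockManager = &fakeLockManager{held: map[string]string{}}
    ctx := context.Background()

    token, err := locker.Acquire(ctx, "seat:42", time.Second)
    if err != nil {
        t.Fatalf("first acquire should succeed: %v", err)
    }
    if _, err := locker.Acquire(ctx, "seat:42", time.Second); err == nil {
        t.Fatal("second acquire should fail while the lock is held")
    }
    if err := locker.Release(ctx, "seat:42", token); err != nil {
        t.Fatalf("release should succeed: %v", err)
    }
}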

Pitfalls to Avoid ⚠️

While distributed locks can be powerful, they come with caveats:

  • Short TTLs may expire before the critical section is done, leading to unintended parallel execution.
  • Long TTLs may block progress if a node crashes without releasing the lock.
  • Don’t use locks as permanent ownership — they’re for coordination, not persistent state.
  • Always wrap lock usage with context timeouts or deadlines to avoid deadlocks (see the sketch right after this list).
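
To make the last two points concrete, here is a hedged sketch using Redsync's context-aware calls: acquisition is bounded by a context deadline, and a modest expiry is extended mid-work instead of choosing a very long TTL up front. The two "half of the work" steps are illustrative placeholders:

package main

import (
    "context"
    "log"
    "time"

    "github.com/go-redsync/redsync/v4"
    "github.com/go-redsync/redsync/v4/redis/goredis/v9"
    goredislib "github.com/redis/go-redis/v9"
)

func main() {
    client := goredislib.NewClient(&goredislib.Options{Addr: "localhost:6379"})
    rs := redsync.New(goredis.NewPool(client))

    // Bound acquisition with a deadline so a contended lock or an unreachable
    // Redis cannot stall this caller forever.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Keep the expiry modest; extend it while working instead of picking a
    // very long TTL that would block others if this node crashed.
    mutex := rs.NewMutex("my-global-mutex", redsync.WithExpiry(8*time.Second))

    if err := mutex.LockContext(ctx); err != nil {
        log.Fatalf("could not acquire lock before deadline: %v", err)
    }
    // Release with a fresh context so an already-expired ctx still lets us unlock.
    defer mutex.UnlockContext(context.Background())

    log.Println("first half of the work...")

    // Extend the lock before it lapses if the critical section runs long.
    if ok, err := mutex.ExtendContext(ctx); !ok || err != nil {
        log.Fatalf("lost the lock mid-work: %v", err)
    }

    log.Println("second half of the work...")
}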

🧠 Pro Tip: Design your system to recover gracefully even if the lock fails or expires unexpectedly.

Conclusion 🎯

Distributed locks are a foundational building block in microservices and distributed architectures. Whether you’re managing ticket availability, serializing task execution, or controlling access to shared state, having a reliable and testable lock mechanism makes a huge difference.
