DEV Community

ANKUSH CHOUDHARY JOHAL

Posted on • Originally published at johal.in

How to Implement Distributed Locking with Redis 8 and Go 1.23 for Microservices


Distributed locking is a critical pattern for microservices architectures to prevent race conditions when multiple services access shared resources. This guide walks through implementing a robust distributed lock using Redis 8’s new features and Go 1.23’s improved concurrency primitives.

What Is Distributed Locking?

In a monolithic application, in-memory locks (like Go’s sync.Mutex) prevent concurrent access to shared resources. For microservices deployed across multiple nodes, these local locks fail because they don’t coordinate across processes. Distributed locking solves this by using a shared, centralized store (Redis) to manage lock state consistently across all service instances.

Common use cases include updating shared database records, processing idempotent messages, and managing limited resource allocation (e.g., seat booking).
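To make the contrast concrete, here is what local locking looks like inside a single process. This works correctly within one process, but a second service instance running elsewhere never sees this mutex — which is exactly the gap distributed locking fills. (The helper name lockedCount is just for illustration.)

```go
package main

import (
    "fmt"
    "sync"
)

// lockedCount increments a mutex-protected counter from n goroutines.
// The mutex lives in this process's memory: it serializes these goroutines,
// but does nothing for a second service instance in another process.
func lockedCount(n int) int {
    var (
        mu      sync.Mutex
        counter int
        wg      sync.WaitGroup
    )
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock()
            counter++
            mu.Unlock()
        }()
    }
    wg.Wait()
    return counter
}

func main() {
    fmt.Println(lockedCount(100)) // 100
}
```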

Why Redis 8 for Distributed Locking?

Redis is a natural fit for distributed locking: it offers single-command atomic operations (SET with the NX flag and a TTL applied in one step), precise key expiration, and very low latency, and Redis 8 continues to improve throughput and expiration accuracy. There is no dedicated lock module; locks are built from these primitives, with the Redlock algorithm layered on top when multi-node fault tolerance is needed. Combined with Go 1.23's mature context support and error wrapping, this makes a dependable stack for high-performance microservices.

Prerequisites

  • Redis 8 instance (local or managed, e.g., Redis Cloud) running and accessible
  • Go 1.23 or later installed locally
  • Basic knowledge of Go syntax and Redis fundamentals
  • A sample microservice to integrate the lock (we’ll use a simple HTTP service for demonstration)

Step 1: Set Up Redis 8

For local development, start a Redis 8 instance via Docker:

docker run -d --name redis-8 -p 6379:6379 redis:8.0-alpine

Verify connectivity with the Redis CLI:

docker exec -it redis-8 redis-cli ping
# Expected output: PONG

Step 2: Initialize Go 1.23 Project

Create a new Go module for the lock client:

mkdir redis-lock-demo && cd redis-lock-demo
go mod init github.com/yourusername/redis-lock-demo
go get github.com/redis/go-redis/v9

We use the official go-redis v9 client, which supports Redis 8’s full feature set.

Step 3: Implement Core Redis Lock Client

Create a lock.go file with the core lock logic. We’ll implement the single-instance locking pattern that Redlock builds on: atomic lock acquisition, TTL enforcement, and unique lock identifiers to prevent accidental unlocking by other clients.

package main

import (
    "context"
    "crypto/rand"
    "encoding/hex"
    "errors"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

// RedisLockClient wraps the Redis client and lock configuration
type RedisLockClient struct {
    client *redis.Client
    ttl    time.Duration
}

// NewRedisLockClient initializes a new lock client with default TTL of 10s
func NewRedisLockClient(addr string, ttl time.Duration) *RedisLockClient {
    if ttl == 0 {
        ttl = 10 * time.Second
    }
    return &RedisLockClient{
        client: redis.NewClient(&redis.Options{
            Addr: addr,
        }),
        ttl: ttl,
    }
}

// generateLockID creates a unique 32-byte hex string for lock identification
func generateLockID() (string, error) {
    b := make([]byte, 16)
    if _, err := rand.Read(b); err != nil {
        return "", fmt.Errorf("failed to generate lock ID: %w", err)
    }
    return hex.EncodeToString(b), nil
}

// Acquire attempts to acquire a lock for the given key. Returns the lock ID if successful.
func (c *RedisLockClient) Acquire(ctx context.Context, key string) (string, error) {
    lockID, err := generateLockID()
    if err != nil {
        return "", err
    }

    // Atomic SET with NX (set only if the key does not exist) plus a TTL in
    // the same command, so the lock always expires even if this client crashes
    ok, err := c.client.SetNX(ctx, key, lockID, c.ttl).Result()
    if err != nil {
        return "", fmt.Errorf("failed to acquire lock: %w", err)
    }
    if !ok {
        return "", errors.New("lock already held by another client")
    }
    return lockID, nil
}

// Release unlocks the key only if the provided lockID matches the stored value
func (c *RedisLockClient) Release(ctx context.Context, key, lockID string) error {
    // A Lua script makes the GET check and DEL atomic; a separate GET followed
    // by DEL could delete a lock that another client acquired in between.
    luaScript := `
        if redis.call("get", KEYS[1]) == ARGV[1] then
            return redis.call("del", KEYS[1])
        else
            return 0
        end
    `
    script := redis.NewScript(luaScript)
    res, err := script.Run(ctx, c.client, []string{key}, lockID).Int()
    if err != nil {
        return fmt.Errorf("failed to release lock: %w", err)
    }
    if res == 0 {
        return errors.New("lock not held by this client (expired or taken over)")
    }
    return nil
}

Step 4: Handle Edge Cases

Robust distributed locks require handling common edge cases:

  • TTL Extension: If a task takes longer than the initial TTL, extend the lock before it expires. A plain EXPIRE cannot verify ownership, so use a Lua script that checks the stored lock ID before refreshing the TTL.
  • Retry Logic: Add exponential backoff for lock acquisition retries to avoid thundering herd problems.
  • Deadlock Prevention: Always set a TTL, and ensure locks are released even if the service crashes (Redis auto-expires keys).
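For TTL extension, the same compare-then-act Lua pattern used in Release applies: refresh the expiry only if the stored value still matches our lock ID. A sketch of such a script (run it via redis.NewScript just like the release script, passing the new TTL in milliseconds as ARGV[2]):

```lua
-- Extend the lock's TTL only if we still own it.
-- KEYS[1] = lock key, ARGV[1] = our lock ID, ARGV[2] = new TTL in ms
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("pexpire", KEYS[1], ARGV[2])
else
    return 0
end
```

A return value of 0 means the lock expired or was taken over, so the caller should stop assuming ownership rather than continue the task.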

Add a retry wrapper to the Acquire method:

// AcquireWithRetry attempts to acquire a lock with exponential backoff retry
func (c *RedisLockClient) AcquireWithRetry(ctx context.Context, key string, maxRetries int) (string, error) {
    var (
        lockID string
        err    error
        delay  = 50 * time.Millisecond
    )

    for i := 0; i < maxRetries; i++ {
        lockID, err = c.Acquire(ctx, key)
        if err == nil {
            return lockID, nil
        }
        // Give up only if the caller's context is done; any other failure
        // (typically contention, or a transient Redis error) is retried.
        if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
            return "", err
        }
        // Exponential backoff with cap at 2s
        select {
        case <-ctx.Done():
            return "", ctx.Err()
        case <-time.After(delay):
            delay *= 2
            if delay > 2*time.Second {
                delay = 2 * time.Second
            }
        }
    }
    return "", errors.New("max retries exceeded for lock acquisition")
}

Step 5: Integrate with a Microservice

Create a simple HTTP microservice that uses the lock to protect a shared counter. Create main.go:

package main

import (
    "context"
    "fmt"
    "log"
    "net/http"
    "sync/atomic"
    "time"
)

var (
    counter uint64
    lockKey = "shared-counter-lock"
)

func main() {
    lockClient := NewRedisLockClient("localhost:6379", 10*time.Second)

    http.HandleFunc("/increment", func(w http.ResponseWriter, r *http.Request) {
        // Tie acquisition to the request context so retries stop if the
        // client disconnects.
        lockID, err := lockClient.AcquireWithRetry(r.Context(), lockKey, 5)
        if err != nil {
            http.Error(w, fmt.Sprintf("failed to acquire lock: %v", err), http.StatusConflict)
            return
        }
        defer func() {
            // Release with a fresh context: the request context may already be
            // cancelled by the time the handler returns.
            ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
            defer cancel()
            if err := lockClient.Release(ctx, lockKey, lockID); err != nil {
                log.Printf("failed to release lock: %v", err)
            }
        }()

        // Critical section: update shared counter
        atomic.AddUint64(&counter, 1)
        time.Sleep(100 * time.Millisecond) // Simulate work

        fmt.Fprintf(w, "Counter: %d\n", atomic.LoadUint64(&counter))
    })

    log.Println("Server starting on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Run the service:

go run .
# Test with concurrent requests:
ab -n 100 -c 10 http://localhost:8080/increment
# Counter values increase monotonically; under heavy contention some requests
# may return 409 after exhausting their retries

Best Practices

  • Always use unique lock IDs to prevent accidental unlocking by other clients.
  • Set TTLs shorter than the maximum expected task duration, and extend as needed.
  • Use Lua scripts for atomic lock release to avoid race conditions between GET and DEL.
  • Log all lock acquisition/release failures for debugging.
  • For multi-region deployments, use RedLock across multiple Redis instances to avoid single points of failure.

Conclusion

Implementing distributed locking with Redis 8 and Go 1.23 gives microservices a reliable, high-performance defense against race conditions. The atomic acquire/release pattern shown here keeps shared state consistent across distributed nodes, and can be extended to Redlock across multiple Redis instances where fault tolerance demands it. Test thoroughly under load, and tune TTL and retry settings to match your workload.
