DEV Community

Ali Rıza Aynacı

Scaling with Redis Sharding in Go: A Practical Guide with Hashing Strategies

🧠 Introduction

When I was building RLaaS (Rate Limiting as a Service), something critical dawned on me.

My project is built for developers to embed into their own systems. But those systems have their own users, potentially thousands each. Even if I serve just 100 developer clients, and each of them has 100 active users making roughly one request per second, that's 10,000+ requests per second hitting my infrastructure.

That number grows multiplicatively with every new client, and fast.

This meant one thing:

I couldn’t rely on a single Redis node to handle this load. I had to scale — horizontally.

The logical choice? Sharding — the practice of splitting data across multiple Redis nodes. This article walks you through how I architected a scalable Redis shard selector in Go — and how you can do the same.

We’ll cover:

  • Why Redis needs sharding in architectures that must scale
  • Trade-offs between hash_mod and consistent_hash
  • Implementation in idiomatic Go with modular design
  • Reusability across multiple backend components (not just rate limiters)

🚨 Redis Limitations at Scale

Redis is a go-to for speed, but its command execution is single-threaded and its dataset is memory-bound. While it's perfect for early-stage workloads, it hits hard limits when scaling:

  • All keys live in RAM
  • One Redis process runs on a single core
  • Write contention increases rapidly
  • No native horizontal scaling (unless you use Redis Cluster)

This is especially painful for systems like RLaaS, where thousands of keys are being written and read per second.

The answer? Sharding.


🧩 What Is Redis Sharding?

Sharding means splitting your keyspace across multiple Redis instances. Each instance becomes responsible for a portion of your data.

This can:

  • Spread the load evenly
  • Prevent single-node failure from cascading
  • Let you scale capacity by adding nodes

But to make this work, you need a shard selection strategy — logic that determines which node a given key should go to.


🎯 Sharding Strategies in Practice

There are two main techniques to pick from:

1. 🧮 hash_mod

The simplest and fastest option.

func (s *ShardSelector) hashModSelect(key string) string {
    hash := s.hash32(key)
    index := hash % uint32(len(s.nodes))
    return s.nodes[index]
}

How it works:

  • Hash the key
  • Modulo it by the number of Redis nodes
  • Pick that node

✅ Pros:

  • Super fast
  • Very simple to write

❌ Cons:

  • Adding/removing nodes breaks key mappings
  • Rehashes almost everything

Good for static clusters. Not ideal for dynamic or elastic systems.


2. 🔁 consistent_hash

This method is cluster-friendly and more flexible.

func (s *ShardSelector) consistentHashSelect(key string) string {
    keyHash := s.hash32(key)
    var selectedNode string
    minDistance := uint32(^uint32(0))

    for _, node := range s.nodes {
        nodeHash := s.hash32(node)
        // Clockwise distance from the key to this node; unsigned
        // subtraction wraps around the ring automatically.
        distance := nodeHash - keyHash

        if distance < minDistance {
            minDistance = distance
            selectedNode = node
        }
    }

    if selectedNode == "" {
        return s.nodes[0]
    }
    return selectedNode
}

How it works:

  • Keys and nodes are hashed onto a virtual ring
  • Keys go to the first node clockwise from their position

✅ Pros:

  • Adding/removing nodes only remaps a small set of keys
  • Better distribution under churn

❌ Cons:

  • Slightly harder to implement
  • Doesn’t guarantee perfect balance (unless you use virtual nodes)

🛠 Building the Shard Selector in Go

Let’s break down the full component.

🌱 Struct Definition

package sharding

type ShardSelector struct {
    nodes    []string
    strategy string
}

func NewShardSelector(nodes []string, strategy string) *ShardSelector {
    return &ShardSelector{
        nodes:    nodes,
        strategy: strategy,
    }
}

🧮 32-bit Hash Function (FNV)

import "hash/fnv"

func (s *ShardSelector) hash32(key string) uint32 {
    h := fnv.New32a()
    h.Write([]byte(key))
    return h.Sum32()
}

🧭 Strategy Selector

func (s *ShardSelector) GetRedisURL(key string) string {
    if len(s.nodes) == 0 {
        return "redis://localhost:6379/0"
    }

    switch s.strategy {
    case "hash_mod":
        return s.hashModSelect(key)
    case "consistent_hash":
        return s.consistentHashSelect(key)
    default:
        return s.nodes[0]
    }
}

⚙️ Bootstrapping from Environment

import "os"

func InitSharding() *ShardSelector {
    candidates := []string{
        os.Getenv("REDIS_NODE_1"),
        os.Getenv("REDIS_NODE_2"),
        os.Getenv("REDIS_NODE_3"),
    }

    // Skip unset variables so an empty string never becomes a "node".
    nodes := make([]string, 0, len(candidates))
    for _, n := range candidates {
        if n != "" {
            nodes = append(nodes, n)
        }
    }

    strategy := os.Getenv("SHARDING_STRATEGY") // "hash_mod" or "consistent_hash"
    return NewShardSelector(nodes, strategy)
}

🧪 Usage Example

selector := InitSharding()

userKey := "user:8421:rate_limit"
redisURL := selector.GetRedisURL(userKey)
fmt.Println("Selected Redis Node:", redisURL)

This pattern is plug-and-play for any service that relies on Redis.
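Putting the pieces together, here is a condensed end-to-end run. The snippets above live in a `sharding` package, so this sketch inlines the hash_mod path and uses made-up node URLs purely for the demo:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

type ShardSelector struct {
	nodes    []string
	strategy string
}

func (s *ShardSelector) hash32(key string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32()
}

// GetRedisURL routes a key with the hash_mod strategy,
// falling back to a local default when no nodes are configured.
func (s *ShardSelector) GetRedisURL(key string) string {
	if len(s.nodes) == 0 {
		return "redis://localhost:6379/0"
	}
	return s.nodes[s.hash32(key)%uint32(len(s.nodes))]
}

func main() {
	sel := &ShardSelector{
		// Assumed node URLs for the demo.
		nodes: []string{
			"redis://10.0.0.1:6379/0",
			"redis://10.0.0.2:6379/0",
			"redis://10.0.0.3:6379/0",
		},
		strategy: "hash_mod",
	}
	for _, k := range []string{"user:8421:rate_limit", "user:1337:rate_limit"} {
		// The same key always routes to the same node.
		fmt.Println(k, "->", sel.GetRedisURL(k))
	}
}
```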


💼 Reusability Beyond Rate Limiting

The same shard selector can be reused for:

  • Distributed session storage
  • Multi-tenant cache partitioning
  • Microservice request tracking
  • Job queue routing
  • Real-time chat systems

Anywhere Redis becomes a bottleneck — sharding can help.


⚠️ Gotchas and Real-World Considerations

  1. Hot keys dominate:
  • Some keys are hit far more often than others (e.g. "admin", "homepage")
  • You can add virtual nodes or prefix/suffix keys to reduce skew
  2. Node failures:
  • This logic assumes all Redis instances are healthy
  • You’ll need health checks, circuit breakers, and possibly Redis Sentinel or Cluster
  3. Multi-key ops are hard:
  • MGET, transactions, etc. don’t work across shards
  • Design carefully if atomicity is required
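The prefix/suffix trick mentioned under hot keys can be sketched like this: one hot logical counter is fanned out over several physical sub-keys, so writes spread across shards and readers sum the parts. The fan-out factor and key format here are assumptions for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
)

// hotKeyShards is an assumed fan-out factor: one hot logical key is
// split into this many physical sub-keys so the load lands on
// several shards instead of one.
const hotKeyShards = 8

// writeKey picks a random sub-key for writes, e.g. "homepage:views#3".
func writeKey(logical string) string {
	return fmt.Sprintf("%s#%d", logical, rand.Intn(hotKeyShards))
}

// readKeys returns every sub-key; a reader sums counters across them.
func readKeys(logical string) []string {
	keys := make([]string, hotKeyShards)
	for i := range keys {
		keys[i] = fmt.Sprintf("%s#%d", logical, i)
	}
	return keys
}

func main() {
	fmt.Println(writeKey("homepage:views")) // one of homepage:views#0 .. #7
	fmt.Println(readKeys("homepage:views"))
}
```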

🔮 Future Enhancements

  • Add virtual nodes for better key spread
  • Track shard metrics (load, latency, error rates)
  • Automatically skip dead nodes
  • Redis Cluster compatibility

✅ Final Thoughts

Scalability isn’t magic — it’s explicit design.

By thinking about sharding early, you can avoid painful migrations and outages down the line. This Go-based shard selector is simple but powerful, and a solid foundation for production use.

If you're building backend systems with Redis at the core, and scale is on the horizon — sharding is not optional. It’s inevitable.


📎 Code & Repo

Check out the complete implementation:
GitHub - AliRizaAynaci/redis-sharding-demo


🗨️ Open to Discussion

Questions? Suggestions? Want to explore advanced shard balancing or Redis Cluster topics?

Let’s connect. Happy to talk distributed systems anytime.

Happy scaling! 🚀
