In Redis Open Source Cluster, the keyspace is partitioned into 16,384 hash slots. Each slot is served by exactly one node at any point in time, and the cluster achieves high availability through master/replica replication.
Slots are computed as `HASH_SLOT = CRC16(key) mod 16384`.
A key design goal of Cluster is no proxies and no server-side merging: clients are redirected (e.g., via MOVED / ASK) to the correct node and execute commands there.
This architecture often leads to a common production symptom: batch reads that are fast on a standalone Redis (e.g., MGET / pipelining) show much higher P99 jitter and occasional timeouts on Cluster.
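One way to see why fan-out hurts the tail: if each node independently responds within its own p99 budget 99% of the time, a batch that must wait for k nodes only stays within that budget with probability 0.99^k. A rough back-of-the-envelope sketch (the numbers are illustrative, not measured):

```python
# Illustrative tail-latency math for scatter-gather reads (numbers are made up).
# p = probability that a single node responds within its own p99 budget.
p = 0.99

for k in (1, 5, 10, 20):          # k = number of nodes a batch read fans out to
    p_all_fast = p ** k           # the batch is only "fast" if every node is fast
    print(f"fan-out={k:2d}  P(all nodes within p99)={p_all_fast:.3f}  "
          f"P(at least one slow)={1 - p_all_fast:.3f}")

# fan-out=10 -> ~9.6% of batches hit at least one node's tail,
# i.e. a per-node p99 behaves roughly like a batch p90.
```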
1. Root Cause: Cross-slot fan-out amplifies tail latency
Redis Cluster has a clear boundary for “complex multi-key operations”: they are supported only when all involved keys hash to the same slot; otherwise, multi-key capabilities are not available.
Therefore, if a set of logically related keys is spread across multiple slots:
- Variadic multi-key commands such as `MGET` / `MSET` are constrained (a common symptom is a `CROSSSLOT` error).
- Even if `MGET` is replaced by multiple `GET`s or pipelining, the workload still becomes multi-node requests + client-side aggregation (scatter-gather, sketched below), and overall latency is dominated by the slowest node.
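A minimal sketch of what a cross-slot batch read effectively turns into. `slot_of` and `node_for_slot` are hypothetical placeholders for the client's slot map, and each node is assumed to be a per-node connection with a redis-py-style `pipeline()` / `get()` interface:

```python
from collections import defaultdict

def scatter_gather_get(keys, slot_of, node_for_slot):
    """Group keys by owning node, issue one pipelined read per node,
    then merge results client-side. Total latency ~= the slowest node."""
    by_node = defaultdict(list)
    for key in keys:
        by_node[node_for_slot(slot_of(key))].append(key)

    results = {}
    for node, node_keys in by_node.items():
        # One round trip per node (e.g. a pipeline of GETs, or a same-slot MGET).
        pipe = node.pipeline()
        for key in node_keys:
            pipe.get(key)
        for key, value in zip(node_keys, pipe.execute()):
            results[key] = value
    return results
```

Cluster-aware clients may hide this loop, but the per-node round trips and the client-side merge still happen.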
2. Core Mechanism: Hash Tags reduce “M nodes” to “1 node”
Cluster provides hash tags for data affinity: they force a related group of keys onto the same slot, enabling multi-key operations and significantly reducing cross-node fan-out.
Hash tag rules (strictly per cluster-spec)
Hash tag processing is enabled only when all of the following are true:
- The key contains `{`
- There is a `}` to the right of that `{`
- There is at least one character between the first `{` and the first `}` to its right

When valid, Cluster computes the CRC16 only on the substring inside that first valid `{...}` to determine the slot.

An empty `{}` is not a valid hash tag: for example, `foo{}{bar}` hashes the entire key, not the empty string.
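The rules above are mechanical enough to reproduce locally. A minimal sketch of the slot computation (CRC16/XMODEM, the variant Redis Cluster uses, plus the hash tag extraction described above); for anything authoritative, compare against `CLUSTER KEYSLOT` on a real node:

```python
def crc16(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0x0000), the variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Apply the hash tag rules, then CRC16 mod 16384."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:   # non-empty tag only
            key = key[start + 1:end]         # hash just the tag content
    return crc16(key.encode()) % 16384

# Keys sharing a tag land in the same slot; an empty {} falls back to the whole key.
assert hash_slot("favorites:{123}:news001") == hash_slot("favorites:{123}:news010")
assert hash_slot("foo{}{bar}") == crc16(b"foo{}{bar}") % 16384
```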
3. Where the benefit comes from: less fan-out, not “faster Redis execution”
Example (Favorites scenario):
Before: `favorites:123:news001` … `favorites:123:news010`

Different full key strings → likely different slots → `MGET` may be disallowed or suffer high jitter.

After: `favorites:{...}:news001` … `favorites:{...}:news010`

Same tag → same slot → same node → collapse "multi-node concurrent requests + merge" into "single-node, one round trip".
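For illustration, a hedged sketch using redis-py's cluster client (the host/port and key names are placeholders; other cluster-aware clients behave similarly): once the keys share a tag, a single `MGET` is legal and is served by one node in one round trip.

```python
from redis.cluster import RedisCluster

# Placeholder endpoint; any reachable cluster node works as a startup node.
rc = RedisCluster(host="127.0.0.1", port=7000, decode_responses=True)

user_id = "123"
keys = [f"favorites:{{{user_id}}}:news{i:03d}" for i in range(1, 11)]

for k in keys:
    rc.set(k, "1")

# All keys share the {123} tag -> same slot -> one node, one round trip.
values = rc.mget(keys)
print(dict(zip(keys, values)))
```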
4. Related Issues: practical boundaries and common misconceptions
1) If {...} contents are the same across services, will values collide?
No.
Hash tags only influence which slot/node a key maps to; they do not change key uniqueness. Overwrites happen only when the entire key string is identical.
The real risk is missing namespaces between domains/services, which can cause full-key collisions—not shared tag values.
2) Must the tag contain only a userId?
From a correctness standpoint, using only userId is fine.
From a load and “affinity boundary” standpoint, the tag should express which keys truly must be colocated.
If multiple domains share {userId}, the same user’s traffic across domains tends to concentrate on the same slot (no value confusion, but potential hot-spot stacking). A safer pattern is a composite tag that constrains affinity within a domain:
- `favorites:{fav:123}:news001`
- `orders:{ord:123}:order8899`
This preserves multi-key capability within the domain while reducing unnecessary cross-domain slot binding.
3) Even with hash tags, mixing different tags still fails
For example, a single MGET mixing {u1} and {u2} is still cross-slot. Cluster’s constraint remains “same slot”; hash tags only make “same slot” controllable by design.
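A short sketch of that failure mode, under the same redis-py assumptions as above (exact error surface varies by client version: it may be rejected client-side or come back from the server as `CROSSSLOT`):

```python
from redis.cluster import RedisCluster

rc = RedisCluster(host="127.0.0.1", port=7000)  # placeholder startup node

try:
    # {u1} and {u2} hash to different slots, so this is still a cross-slot request.
    rc.mget(["session:{u1}:token", "session:{u2}:token"])
except Exception as exc:
    # Typically a CROSSSLOT error from the server, or a client-side
    # "keys must map to the same slot" exception, depending on the client.
    print(f"cross-slot multi-key rejected: {exc}")
```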
5. Key Naming and Tag Design Guidelines (production-ready conventions)
This section can be used directly as an internal engineering guideline.
1) Namespacing (mandatory)
Prevent collisions across services/modules.
Recommended prefix structure:
<env>:<service>:<module>:...
Examples:
- `prod:news:favorites:{fav:123}:news:001`
- `test:uc:session:{sess:uid123}:token`
2) Tag placement and content (core)
Recommended structure:
<prefix>:{<affinity>}:<suffix>
Where <affinity> defines the “affinity boundary”. Prefer domain + primary dimension:
- `{fav:<userId>}`: favorites grouped by user
- `{ord:<orderId>}`: orders grouped by order
- `{cart:<userId>}`: cart grouped by user
This yields:
- Stable same-slot behavior within the domain (enabling multi-key / reducing fan-out)
- Avoids unnecessary cross-domain binding even if the dimension is the same (e.g., same userId)
Note: Cluster uses only the first valid `{...}` for hashing; an empty `{}` is invalid and the whole key is hashed instead.
3) Fields and hierarchy (readability + extensibility)
Use a consistent delimiter (commonly :) and make object types/fields explicit:
- `prod:news:favorites:{fav:123}:news:001` (object: news, id: 001)
- `prod:news:favorites:{fav:123}:meta` (aggregate metadata)
- `prod:uc:profile:{uc:123}:base` (user base profile)
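The conventions in 1) to 3) are easy to centralize in a small helper so individual call sites can't drift; a sketch (`build_key` and the example fields are ours, not a standard API):

```python
def build_key(env: str, service: str, module: str,
              affinity: str, *fields: str) -> str:
    """Compose <env>:<service>:<module>:{<affinity>}:<fields...>.

    The affinity value (e.g. "fav:123") is wrapped in {} so that all keys
    sharing it land in the same hash slot, while the surrounding namespace
    keeps different services/modules from colliding on the full key string.
    """
    if not affinity or "{" in affinity or "}" in affinity:
        raise ValueError("affinity must be non-empty and must not contain braces")
    return ":".join([env, service, module, "{" + affinity + "}", *fields])

# Examples matching the guideline above:
print(build_key("prod", "news", "favorites", "fav:123", "news", "001"))
# -> prod:news:favorites:{fav:123}:news:001
print(build_key("prod", "news", "favorites", "fav:123", "meta"))
# -> prod:news:favorites:{fav:123}:meta
print(build_key("prod", "uc", "profile", "uc:123", "base"))
# -> prod:uc:profile:{uc:123}:base
```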
4) Versioning (recommended)
Add versions to handle value schema upgrades safely:
- `prod:news:favorites:v1:{fav:123}:news:001`
- `prod:news:favorites:v2:{fav:123}:news:001`
5) Skew & hot keys (must evaluate)
Hash tags intentionally pin a group of keys to one slot. If the tag is too coarse, it can create skew. The cluster spec notes that hash tags exist to enable multi-key operations, and forcing too much into one slot leads to imbalance.
Avoid:
- Global tags like `{common}` or `{config}`
- Using a single tag to aggregate site-wide data
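Before rolling out a tag scheme, it is worth sanity-checking how a representative sample of key names spreads across slots. A small sketch that reuses the `hash_slot` helper sketched in section 2 (the sample keys below are illustrative, not real data):

```python
from collections import Counter

# hash_slot() is the sketch from section 2 (CRC16 + hash tag extraction).
sample_keys = (
    [f"favorites:{{fav:{uid}}}:news{i:03d}" for uid in range(100) for i in range(10)]
    + [f"config:{{common}}:flag{i}" for i in range(1000)]   # a too-coarse global tag
)

slot_counts = Counter(hash_slot(k) for k in sample_keys)
total = sum(slot_counts.values())

# A healthy scheme spreads load over many slots; a coarse tag like {common}
# shows up as one slot holding a disproportionate share of the sample.
for slot, count in slot_counts.most_common(5):
    print(f"slot {slot:5d}: {count:5d} keys ({count / total:.1%})")
```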