Wahyu Tricahyo
Redis Advanced Techniques: Beyond Simple Key-Value Storage

Redis is usually introduced as a caching layer, and for many teams that's where the story ends. But Redis is far more than a cache. It's a data structure server with capabilities that can replace entire categories of infrastructure if you know how to use it. In this post, I'll cover advanced techniques that unlock Redis's full potential, with all examples using TypeScript and ioredis.

Data Structures You're Probably Not Using

Most developers stick to strings and maybe hashes. Redis offers much more, and choosing the right structure for your use case can eliminate complexity from your application layer entirely.

Sorted Sets are one of Redis's most underrated features. They store members with a score, keeping everything ordered automatically. This makes them perfect for leaderboards, priority queues, rate limiters, and time-based feeds.

import Redis from "ioredis";

const redis = new Redis(); // connects to localhost:6379 by default

async function leaderboardExample(): Promise<void> {
  await redis.zadd("leaderboard", 1500, "player:alice");
  await redis.zadd("leaderboard", 2300, "player:bob");
  await redis.zadd("leaderboard", 1800, "player:carol");

  // Top 10 players (highest scores first)
  const topPlayers = await redis.zrevrange("leaderboard", 0, 9, "WITHSCORES");
  // Returns: ["player:bob", "2300", "player:carol", "1800", "player:alice", "1500"]

  // Parse into a usable format
  const parsed: { member: string; score: number }[] = [];
  for (let i = 0; i < topPlayers.length; i += 2) {
    parsed.push({ member: topPlayers[i], score: Number(topPlayers[i + 1]) });
  }

  // Players scoring between 1000 and 2000
  const midRange = await redis.zrangebyscore("leaderboard", 1000, 2000, "WITHSCORES");
  console.log("Mid-range players:", midRange);
}

HyperLogLog solves the problem of counting unique items in a dataset without storing every item. It uses roughly 12KB of memory regardless of how many elements you add, with a standard error of about 0.81%. If you need to count unique visitors, unique search queries, or distinct events, HyperLogLog gives you an approximate count that's good enough for analytics at a fraction of the memory cost.

async function hyperLogLogExample(): Promise<void> {
  const today = "2025-02-14";

  await redis.pfadd(`unique_visitors:${today}`, "user:1001", "user:1002", "user:1003");
  await redis.pfadd(`unique_visitors:${today}`, "user:1001"); // duplicate, won't increase count

  const count = await redis.pfcount(`unique_visitors:${today}`);
  console.log(`Unique visitors today: ${count}`); // ~3

  // Merge multiple days for weekly count
  await redis.pfmerge(
    "unique_visitors:week",
    "unique_visitors:2025-02-10",
    "unique_visitors:2025-02-11"
  );
  const weeklyCount = await redis.pfcount("unique_visitors:week");
  console.log(`Unique visitors this week: ${weeklyCount}`);
}
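The 12KB and 0.81% figures aren't arbitrary. Redis's dense HyperLogLog encoding uses 2^14 = 16,384 registers of 6 bits each, and the standard error of the estimator is 1.04/√m. A quick sanity check of those numbers:

```typescript
// Sanity-check the HyperLogLog figures quoted above.
// Redis uses m = 2^14 registers, 6 bits each, in the dense encoding
// (plus a small header, which is why real keys report slightly over 12KB).
const registers = 2 ** 14;                  // 16384
const denseBytes = (registers * 6) / 8;     // 6 bits per register
const stdError = 1.04 / Math.sqrt(registers);

console.log(`Registers: ${registers}`);
console.log(`Dense size: ${denseBytes} bytes (~12 KB)`);
console.log(`Standard error: ${(stdError * 100).toFixed(2)}%`); // 0.81%
```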

Streams give you an append-only log structure with consumer group support. Think of them as a lightweight alternative to Kafka for many use cases. They support blocking reads, message acknowledgment, and automatic ID generation, making them a solid choice for event sourcing, task queues, and inter-service communication.

async function streamProducerExample(): Promise<void> {
  const id1 = await redis.xadd("orders", "*", "product", "laptop", "quantity", "1", "customer", "alice");
  const id2 = await redis.xadd("orders", "*", "product", "phone", "quantity", "2", "customer", "bob");
  console.log(`Added orders: ${id1}, ${id2}`);
}

async function streamConsumerExample(): Promise<void> {
  // Create a consumer group (wrap in try/catch since it throws if group exists)
  try {
    await redis.xgroup("CREATE", "orders", "processing", "$", "MKSTREAM");
  } catch (err) {
    // BUSYGROUP means the group already exists -- safe to ignore; rethrow anything else
    if (!(err instanceof Error && err.message.includes("BUSYGROUP"))) throw err;
  }

  // Consumer reads from the group with a 2-second block
  const messages = await redis.xreadgroup(
    "GROUP", "processing", "worker-1",
    "COUNT", "5",
    "BLOCK", 2000,
    "STREAMS", "orders", ">"
  );

  if (messages) {
    for (const [stream, entries] of messages) {
      for (const [id, fields] of entries) {
        console.log(`Processing message ${id}:`, fields);
        // Acknowledge after processing
        await redis.xack("orders", "processing", id);
      }
    }
  }
}

Lua Scripting: Atomic Operations on Steroids

When you need multiple Redis commands to execute atomically, Lua scripting is the answer. Scripts run on the server side with no interruption, which means no race conditions and no need for distributed locks in many cases.

A classic example is a rate limiter. You need to check the current count, compare it to a limit, and either increment or reject, all without another request sneaking in between steps.

const rateLimiterScript = `
  local key = KEYS[1]
  local limit = tonumber(ARGV[1])
  local window = tonumber(ARGV[2])

  local current = redis.call("INCR", key)
  if current == 1 then
    redis.call("EXPIRE", key, window)
  end
  if current > limit then
    return 0
  end
  return 1
`;

// Define custom commands for reuse
redis.defineCommand("rateLimit", {
  numberOfKeys: 1,
  lua: rateLimiterScript,
});

// Extend the Redis type so TypeScript knows about our custom command
declare module "ioredis" {
  interface RedisCommander<Context> {
    rateLimit(key: string, limit: string, window: string): Promise<number>;
  }
}

async function checkRateLimit(userId: string): Promise<boolean> {
  // Allow 100 requests per 60 seconds
  const allowed = await redis.rateLimit(`ratelimit:${userId}`, "100", "60");
  return allowed === 1;
}

async function handleRequest(userId: string): Promise<void> {
  const isAllowed = await checkRateLimit(userId);
  if (!isAllowed) {
    console.log("Rate limited!");
    return;
  }
  console.log("Request processed");
}
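To reason about the script's semantics without a server, here's the same fixed-window logic modeled in plain TypeScript. This is an in-memory sketch for understanding and testing only (the Lua version above is what actually runs), and it makes one caveat of fixed windows visible: counts reset at the boundary, so a burst straddling two windows can briefly see up to twice the limit.

```typescript
// In-memory model of the fixed-window rate limiter above (for reasoning/tests, not production).
interface Window {
  count: number;
  expiresAt: number;
}

class FixedWindowLimiter {
  private windows = new Map<string, Window>();

  constructor(private limit: number, private windowMs: number) {}

  // Mirrors the Lua script: increment, start the expiry on the first hit, compare to the limit.
  allow(key: string, now: number): boolean {
    const w = this.windows.get(key);
    if (!w || now >= w.expiresAt) {
      // First request in a fresh window (the `current == 1` branch in the script)
      this.windows.set(key, { count: 1, expiresAt: now + this.windowMs });
      return this.limit >= 1;
    }
    w.count += 1;
    return w.count <= this.limit;
  }
}

const limiter = new FixedWindowLimiter(100, 60_000);
let allowed = 0;
for (let i = 0; i < 150; i++) {
  if (limiter.allow("user:1", 0)) allowed++;
}
console.log(`Allowed ${allowed} of 150 requests in one window`); // 100
```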

Keep your Lua scripts short and focused. Long-running scripts block the entire Redis instance since Redis is single-threaded. If your script takes more than a few milliseconds, rethink your approach.

Pipelining and Transactions

Pipelining is one of the simplest ways to boost Redis performance. Instead of sending commands one at a time and waiting for each response, you batch them together. This cuts down on network round trips dramatically. In high-throughput scenarios, pipelining alone can give you a 5-10x improvement.

async function pipelineExample(): Promise<void> {
  const pipeline = redis.pipeline();

  for (let i = 0; i < 1000; i++) {
    pipeline.set(`key:${i}`, `value:${i}`);
  }

  const results = await pipeline.exec();
  // One round trip instead of 1000
  console.log(`Executed ${results?.length} commands in a single round trip`);
}

// You can also read results from a pipeline
async function pipelineReadExample(): Promise<void> {
  const pipeline = redis.pipeline();

  pipeline.get("user:1001:name");
  pipeline.get("user:1001:email");
  pipeline.zrevrange("leaderboard", 0, 4, "WITHSCORES");

  const results = await pipeline.exec();
  // results is [[null, "Alice"], [null, "alice@example.com"], [null, [...]]]
  // Each entry is [error, result]

  if (results) {
    const [nameErr, name] = results[0];
    const [emailErr, email] = results[1];
    const [leaderboardErr, topPlayers] = results[2];
    console.log(`User: ${name}, Email: ${email}`);
  }
}

Transactions with MULTI/EXEC guarantee that a batch of commands runs sequentially without interleaving from other clients. Combined with WATCH, you get optimistic locking for check-and-set operations.

async function transactionExample(): Promise<boolean> {
  // Optimistic locking: decrement inventory only if it hasn't changed
  await redis.watch("inventory:laptop");

  const currentStock = await redis.get("inventory:laptop");
  const stock = Number(currentStock);

  if (stock <= 0) {
    await redis.unwatch();
    console.log("Out of stock");
    return false;
  }

  const result = await redis
    .multi()
    .set("inventory:laptop", String(stock - 1))
    .exec();

  // result is null if WATCH detected a change
  if (!result) {
    console.log("Conflict detected, retry needed");
    return false;
  }

  console.log("Purchase successful");
  return true;
}
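The conflict branch above returns false and leaves the retry to the caller. A small generic helper (hypothetical name `withRetry`) can wrap any optimistic-locking attempt and retry a bounded number of times:

```typescript
// Generic bounded-retry wrapper for optimistic-locking attempts.
// `attempt` should return true on success and false on a WATCH conflict.
async function withRetry(
  attempt: () => Promise<boolean>,
  maxAttempts = 3
): Promise<boolean> {
  for (let i = 0; i < maxAttempts; i++) {
    if (await attempt()) return true;
  }
  return false; // gave up after maxAttempts conflicts
}

// Usage with the transaction above:
// const ok = await withRetry(() => transactionExample(), 5);
```

In practice you'd also add a short randomized delay between attempts so competing clients don't keep colliding on the same key.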

The important distinction: pipelining reduces latency by batching network calls, while transactions ensure atomicity. You can combine them, but they solve different problems.

Pub/Sub and Beyond

Redis Pub/Sub gives you simple, fire-and-forget messaging. It's great for broadcasting events like cache invalidation signals, live notifications, or real-time updates. But it comes with a major caveat: messages are not persisted. If a subscriber is disconnected when a message is published, that message is lost forever.

// Important: ioredis requires a separate connection for subscriptions because
// once a connection enters subscriber mode, it can only run subscribe/unsubscribe
// commands — all other commands like GET or SET will be rejected.
const subscriber = new Redis();
const publisher = new Redis();

interface NotificationPayload {
  type: string;
  from: string;
  message: string;
}

// Subscriber
subscriber.subscribe("notifications:user:1001", (err, count) => {
  if (err) {
    console.error("Subscribe error:", err);
    return;
  }
  console.log(`Subscribed to ${count} channel(s)`);
});

subscriber.on("message", (channel: string, message: string) => {
  const payload: NotificationPayload = JSON.parse(message);
  console.log(`[${channel}] ${payload.type} from ${payload.from}: ${payload.message}`);
});

// Publisher (from another part of your app)
async function sendNotification(userId: string, payload: NotificationPayload): Promise<void> {
  await publisher.publish(`notifications:user:${userId}`, JSON.stringify(payload));
}

// Pattern-based subscriptions
subscriber.psubscribe("notifications:*", (err) => {
  if (err) console.error("Pattern subscribe error:", err);
});

subscriber.on("pmessage", (pattern: string, channel: string, message: string) => {
  console.log(`[pattern: ${pattern}] [channel: ${channel}] ${message}`);
});

For cases where you need message durability, use Streams instead. Streams retain messages, support consumer groups for load balancing, and let you replay history. Pub/Sub is for ephemeral, real-time broadcasting; Streams are for reliable message processing.

Keyspace Notifications are a specialized form of Pub/Sub that lets you subscribe to events happening inside Redis itself. You can get notified when keys expire, when a key is deleted, or when any command modifies data. This is useful for building reactive systems, cache invalidation callbacks, or session expiry handling.

async function keyspaceNotifications(): Promise<void> {
  const sub = new Redis();

  // Enable keyspace notifications for expired events
  await redis.config("SET", "notify-keyspace-events", "Ex");

  // Subscribe to expiration events on db 0
  sub.subscribe("__keyevent@0__:expired", (err) => {
    if (err) console.error("Keyspace subscribe error:", err);
  });

  sub.on("message", (channel: string, expiredKey: string) => {
    console.log(`Key expired: ${expiredKey}`);
    // Handle session cleanup, cache refresh, etc.
  });

  // Test it: set a key that expires in 5 seconds
  await redis.set("session:temp", "data", "EX", 5);
}

Memory Optimization Techniques

Redis keeps everything in memory, so efficiency matters. Here are strategies to reduce your memory footprint without sacrificing functionality.

Use hashes for small objects. Redis has a special memory optimization where small hashes (under hash-max-ziplist-entries and hash-max-ziplist-value thresholds, renamed to hash-max-listpack-entries / hash-max-listpack-value in Redis 7+) are stored as a compact structure instead of a full hash table. Grouping related keys into hashes can cut memory usage significantly.

interface UserProfile {
  name: string;
  email: string;
  plan: string;
}

// Instead of multiple individual keys (memory-heavy):
async function inefficientStorage(userId: string, profile: UserProfile): Promise<void> {
  await redis.set(`user:${userId}:name`, profile.name);
  await redis.set(`user:${userId}:email`, profile.email);
  await redis.set(`user:${userId}:plan`, profile.plan);
}

// Use a single hash (memory-efficient):
async function efficientStorage(userId: string, profile: UserProfile): Promise<void> {
  await redis.hset(`user:${userId}`, profile);
}

async function getProfile(userId: string): Promise<UserProfile | null> {
  const data = await redis.hgetall(`user:${userId}`);
  if (!data || Object.keys(data).length === 0) return null;
  return data as unknown as UserProfile;
}

// Partial reads and updates
async function updatePlan(userId: string, newPlan: string): Promise<void> {
  await redis.hset(`user:${userId}`, "plan", newPlan);
}

async function getEmail(userId: string): Promise<string | null> {
  return redis.hget(`user:${userId}`, "email");
}

Set appropriate TTLs on everything. Stale data accumulating without expiration is the most common cause of unexpected memory growth.

// Set TTL at write time
async function cacheWithTTL(key: string, value: string, ttlSeconds: number): Promise<void> {
  await redis.set(key, value, "EX", ttlSeconds);
}

// Helper: cache with automatic JSON serialization
async function cacheJSON<T>(key: string, data: T, ttlSeconds: number): Promise<void> {
  await redis.set(key, JSON.stringify(data), "EX", ttlSeconds);
}

async function getCachedJSON<T>(key: string): Promise<T | null> {
  const raw = await redis.get(key);
  if (!raw) return null;
  try {
    return JSON.parse(raw) as T;
  } catch {
    // Corrupted or unexpected data - delete the bad key and return a cache miss
    await redis.del(key);
    return null;
  }
}

Use object and memory commands to understand how Redis is actually storing your data.

async function inspectKey(key: string): Promise<void> {
  const encoding = await redis.object("ENCODING", key);
  const memoryUsage = await redis.memory("USAGE", key);
  const ttl = await redis.ttl(key);

  console.log(`Key: ${key}`);
  console.log(`  Encoding: ${encoding}`);     // "ziplist", "hashtable", "embstr", etc.
  console.log(`  Memory: ${memoryUsage} bytes`);
  console.log(`  TTL: ${ttl === -2 ? "key does not exist" : ttl === -1 ? "no expiry" : `${ttl}s`}`);
}

Redis Cluster and High Availability

For production workloads, a single Redis instance is a single point of failure. ioredis has first-class support for both Sentinel and Cluster modes.

// Sentinel setup for automatic failover
const sentinelRedis = new Redis({
  sentinels: [
    { host: "sentinel-1", port: 26379 },
    { host: "sentinel-2", port: 26379 },
    { host: "sentinel-3", port: 26379 },
  ],
  name: "mymaster",
});

// Cluster setup for horizontal scaling
const cluster = new Redis.Cluster([
  { host: "node-1", port: 6380 },
  { host: "node-2", port: 6381 },
  { host: "node-3", port: 6382 },
], {
  redisOptions: {
    password: "your_password",
  },
  scaleReads: "slave", // read from replicas to distribute load ("slave" is ioredis's name for this option)
});

Hash tags ensure related keys land on the same node, which is required for multi-key operations in cluster mode.

async function clusterSafeOperations(userId: string): Promise<void> {
  const pipeline = cluster.pipeline();

  pipeline.set(`{user:${userId}}:profile`, JSON.stringify({ name: "Alice" }));
  pipeline.set(`{user:${userId}}:sessions`, JSON.stringify(["sess_abc"]));
  pipeline.set(`{user:${userId}}:preferences`, JSON.stringify({ theme: "dark" }));

  await pipeline.exec();

  // Multi-key operations work because all keys share the same hash tag
  const results = await cluster.mget(
    `{user:${userId}}:profile`,
    `{user:${userId}}:sessions`,
    `{user:${userId}}:preferences`
  );
  console.log("All user data:", results);
}

Cross-slot operations fail in cluster mode. You cannot run mget, sunion, or Lua scripts across keys on different nodes. Design your key structure around hash tags from the start if you plan to use cluster mode. Retrofitting this later is painful.
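The routing rule itself is mechanical: if a key contains a `{...}` section, only the substring between the first `{` and the next `}` is hashed (when non-empty), and the slot is `CRC16(key) mod 16384` using the CRC-16/XMODEM polynomial. Here's a self-contained sketch of the computation a cluster client performs, assuming ASCII keys (real clients hash raw bytes):

```typescript
// Compute the cluster hash slot for a key: CRC-16/XMODEM (poly 0x1021, init 0) mod 16384,
// honoring hash tags -- only the substring inside the first {...} is hashed.
function crc16(data: string): number {
  let crc = 0;
  for (let i = 0; i < data.length; i++) {
    crc ^= data.charCodeAt(i) << 8;
    for (let bit = 0; bit < 8; bit++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

function hashTag(key: string): string {
  const open = key.indexOf("{");
  if (open === -1) return key;
  const close = key.indexOf("}", open + 1);
  // No closing brace, or an empty tag like "foo{}bar": the whole key is hashed
  if (close === -1 || close === open + 1) return key;
  return key.slice(open + 1, close);
}

function keySlot(key: string): number {
  return crc16(hashTag(key)) % 16384;
}

// Both keys share the tag "user:42", so they land on the same slot (and node):
console.log(keySlot("{user:42}:profile") === keySlot("{user:42}:sessions")); // true
```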

Persistence: RDB vs AOF

Understanding Redis persistence options prevents data loss surprises.

RDB snapshots create point-in-time dumps at configured intervals. They're compact, fast to load, and great for backups, but you can lose data between snapshots. AOF (Append Only File) logs every write operation, giving you much better durability at the cost of larger files and slightly more I/O.

The recommended production setup is to use both: AOF for durability with appendfsync everysec (a good compromise between safety and performance), and RDB for fast restarts and backups. Redis 7+ supports Multi-Part AOF, which eliminates the need for periodic AOF rewrites and simplifies operations.

# redis.conf
save 900 1          # RDB: snapshot after 900s if at least 1 key changed
save 300 10         # RDB: snapshot after 300s if at least 10 keys changed
appendonly yes      # Enable AOF
appendfsync everysec # Fsync once per second

You can also trigger persistence operations programmatically through ioredis if needed for backup workflows.

async function triggerBackup(): Promise<void> {
  // Trigger a background RDB snapshot
  await redis.bgsave();

  // Or trigger an AOF rewrite
  await redis.bgrewriteaof();

  // Check last save time
  const lastSave = await redis.lastsave();
  console.log(`Last RDB save: ${new Date(lastSave * 1000).toISOString()}`);
}

Wrapping Up

Redis is deceptively simple on the surface, but there's enormous depth once you move beyond basic string operations. Sorted sets, streams, and Lua scripting alone can replace entire microservices worth of logic. Combine those with proper memory optimization, pipelining, and a well-planned cluster topology, and you have an infrastructure component that punches far above its weight.

Pick one technique here that applies to a problem you're currently solving. Implement it, benchmark it, and you'll likely find that Redis was already capable of something you were building from scratch elsewhere.
