
KamrAn


Redis Zero to Hero: The Complete Mastery Series

Complete Redis CLI Reference: All Core Data Types & Essential Operations

Note: Course notes refined by AI for better organization.

TOC

  1. Strings
  2. Lists
  3. Hashes
  4. Sets
  5. Sorted Sets (ZSets)
  6. Streams (very common since Redis 5.0 for logs, queues, event sourcing)
  7. Bitmaps (space-efficient for boolean flags, analytics)
  8. HyperLogLog (cardinality estimation)
  9. Geospatial indexes (GEO commands)
  10. Common keyspace operations + expiration, transactions, pub/sub basics, scripting intro

Redis is an in-memory data structure store used as a database, cache, message broker, and more. It is single-threaded (so most operations are atomic), extremely fast, and can optionally persist data to disk.

1. Getting Started with Redis CLI

Starting the Server

Redis usually runs as a daemon. Start it in the background:

redis-server &                 # background, default port 6379 (omit & for foreground; ^C to stop)
redis-server --port 6380 &     # custom port

Verify it's running:

ps -ef | grep redis-server
redis-cli ping                 # → PONG (confirms server is alive)

Connecting to Redis CLI

redis-cli                      # default localhost:6379
redis-cli -p 6380              # custom port
redis-cli -h 192.168.1.100 -p 6379  # remote host
redis-cli -a mypassword        # with auth (if requirepass is set)

Tip: Use --raw for cleaner output in scripts, or -n 5 to select database 5 (Redis has 16 databases by default: 0–15).

Common mistake: Forgetting to flush data between tests → use FLUSHDB (current db) or FLUSHALL (everything — dangerous in production!).


2. Redis Strings – The Most Versatile Data Type

Strings in Redis are binary-safe byte sequences (max ~512 MB). They can hold:

  • Plain text
  • Integers (for atomic counters)
  • Floats
  • Serialized JSON / Protobuf / MessagePack
  • Binary data (images, encrypted tokens, etc.)

Why Strings Matter

  • Most basic & fastest type (O(1) for most ops)
  • Used for caching, sessions, counters, locks, rate-limiting flags
  • Atomic integer/float operations → perfect for concurrent systems

Core String Commands

SET – Create / overwrite a string

SET name "Kamran"             # OK
SET name Kamran               # quotes optional for simple values
SET counter 42

Options (very powerful):

  • NX – set only if not exists (used for distributed locks)
  • XX – set only if exists
  • EX seconds / PX milliseconds – set with expiration
SET token abc123 NX EX 3600   # one-time token, expires in 1h
SET config:feature:x false XX # update only if already exists

GET – Retrieve value

GET name          # → "Kamran"
GET nonexistent   # → (nil)

Advanced String Operations

# Substring extraction (0-based, inclusive)
GETRANGE name 0 4         # → "Kamra"  (first 5 chars)

# String length
STRLEN name               # → 6
STRLEN nonexistent        # → 0

# Append to existing string
APPEND name " Anjum"      # → 12 (new length)
GET name                  # → "Kamran Anjum"

# Set and get old value atomically
GETSET oldkey newvalue    # returns previous value or nil (deprecated since 6.2 → use SET key value GET)

Multiple keys (atomic batch)

MSET email "kamran@dev.com" age 23 city Khost
MGET name email age city     # returns array

Expiration & TTL

SET temp "will disappear"
EXPIRE temp 30                    # seconds
SETEX shortlived 10 "gone soon"   # set + expire in one command

TTL temp                          # remaining seconds (integer)
                                  # → -2 = doesn't exist
                                  # → -1 = no expiration
                                  # → positive = seconds left

PERSIST temp                 # remove expiration

Integer & Float Counters (atomic & thread-safe)

Redis parses string as number when using these:

SET views 0
INCR views                   # → 1
INCRBY views 15              # → 16
DECR views                   # → 15
DECRBY views 5               # → 10

SET price 99.99
INCRBYFLOAT price 0.01       # → "100"
INCRBYFLOAT price -1.5       # → "98.5" (negative ok)

Real-world examples

# Rate limiting (simple fixed window)
SET rate:kamran:2026-02-02 0 NX EX 86400   # create the counter + TTL only once
INCR rate:kamran:2026-02-02
# Later: block if the counter exceeds 100

# Distributed lock (very common pattern)
SET lock:resource:123 my-unique-value NX PX 30000   # 30s lock
# ... do work ...
# Release safely: delete only if we still own it (a plain DEL could kill someone else's lock)
EVAL "if redis.call('GET', KEYS[1]) == ARGV[1] then return redis.call('DEL', KEYS[1]) else return 0 end" 1 lock:resource:123 my-unique-value
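The fixed-window idea above can be sketched in plain JavaScript with an in-memory counter standing in for INCR + EXPIRE (the function and key names are illustrative, not part of any Redis client):

```javascript
// Minimal in-memory sketch of fixed-window rate limiting.
const counters = new Map(); // windowKey → { count, expiresAt }

function allowRequest(userId, limit, windowMs, now = Date.now()) {
  const windowStart = Math.floor(now / windowMs) * windowMs; // bucket start, like a date-stamped key
  const key = `rate:${userId}:${windowStart}`;
  let entry = counters.get(key);
  if (!entry || entry.expiresAt <= now) {
    entry = { count: 0, expiresAt: windowStart + windowMs }; // fresh window, like SET ... NX EX
    counters.set(key, entry);
  }
  entry.count += 1; // like INCR
  return entry.count <= limit;
}
```

The counter resets when a request falls into a new window bucket, exactly as a date-stamped Redis key with a TTL would.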

Common mistakes & misconceptions

  • Thinking INCR works on floats → no, use INCRBYFLOAT
  • Using very long keys → impacts memory & performance
  • Forgetting NX when implementing locks → race conditions
  • Assuming GET on non-string → WRONGTYPE error

Quick Summary – Strings Cheat Sheet

  • SET key value [NX|XX] [EX sec|PX ms] → create/overwrite (+ options)
  • GET key → value or (nil)
  • MSET / MGET → batch atomic operations
  • INCR / INCRBY / DECR / DECRBY → integer counters (a missing key is treated as 0)
  • INCRBYFLOAT → floating point
  • GETRANGE / STRLEN / APPEND → string manipulation
  • SETEX / EXPIRE / TTL / PERSIST → expiration control
  • Key rules: strings are binary-safe, max ~512 MB, most ops O(1)

3. Redis Lists – Ordered Collections of Strings

Lists are ordered sequences of strings (implemented as a quicklist — a doubly-linked list of compact blocks) → ideal for queues, stacks, timelines, task queues.

Why Lists?

  • Preserve insertion order
  • Efficient head/tail operations (O(1))
  • Blocking pops → perfect for producer-consumer patterns
  • Can act as bounded queue with LTRIM

Core List Commands (l = left/head, r = right/tail)

Adding elements

LPUSH users Alice Bob      # → inserts left → list = [Bob, Alice]
RPUSH users Charlie        # → inserts right → [Bob, Alice, Charlie]

LPUSHX users Dave          # only if list already exists

Reading elements

LRANGE users 0 -1          # all elements (most common)
LRANGE users 0 9           # first 10
LRANGE users -5 -1         # last 5

LINDEX users 2             # element at index 2 (0-based) or (nil)
LLEN users                 # length (0 if missing)

Removing & popping

LPOP users                 # remove & return leftmost → "Bob"
RPOP users                 # remove & return rightmost → "Charlie"

# Blocking versions (wait if empty)
BLPOP inbox 30             # wait up to 30s, return [key, value] or nil
BRPOP tasks 0              # wait forever (common in workers)

Modifying lists

LSET users 0 "Alicia"      # update index 0

# Insert before/after pivot
LINSERT users BEFORE Charlie Dana
LINSERT users AFTER Alice Eve

# Trim to range (keep only slice)
RPUSH log event1 event2 event3 event4 event5
LTRIM log -3 -1            # keep last 3 → [event3, event4, event5]

Moving between lists (atomic)

LMOVE pending processing LEFT RIGHT   # move head of pending → tail of processing
BLMOVE pending done LEFT RIGHT 60     # blocking version

Sorting (not in-place)

SORT users ALPHA           # lexicographical sort → returns new list
SORT users ALPHA DESC

Real-world patterns

# Simple queue (FIFO)
RPUSH jobs "render:video:uuid123"
BRPOP jobs 0               # worker waits forever

# Recent items (capped)
LPUSH events "user:login:kamran"
LTRIM events 0 999         # keep only newest 1000

# Task pipeline
LPUSH pending "email:welcome:uuid"
LMOVE pending working LEFT RIGHT
# ... process ...
RPUSH completed "email:welcome:uuid"

Performance & complexity notes

  • LPUSH / RPUSH / LPOP / RPOP → O(1)
  • LRANGE / LINDEX → O(n) in worst case (avoid very large offsets on huge lists)
  • LTRIM → O(n) (n = trimmed elements)
  • Use lists for < 10,000–100,000 elements typically; very large lists slow down range ops

Common mistakes

  • Using LINDEX or LSET on huge lists (slow!)
  • Forgetting blocking pops → CPU-spinning polling loops
  • Storing millions of small lists → memory fragmentation
  • Not trimming capped logs → unbounded memory growth

Quick Summary – Lists Cheat Sheet

  • LPUSH / RPUSH → add to head / tail (O(1))
  • LPOP / RPOP → remove from head / tail (O(1))
  • BLPOP / BRPOP → blocking pop (producer-consumer)
  • LRANGE 0 -1 → get entire list
  • LLEN → length
  • LTRIM start stop → cap list size
  • LMOVE / BLMOVE → atomic move between lists
  • LSET / LINSERT / LINDEX → direct index access (use sparingly)
  • Stack → LPUSH + LPOP
  • Queue → RPUSH + LPOP (or LPUSH + RPOP)
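The stack/queue mappings above can be mirrored with a plain array to keep the directions straight (a toy model, not Redis itself — `makeList` and its methods are illustrative): index 0 is the head (left), the end is the tail (right).

```javascript
// Toy model of a Redis list as a JS array: index 0 = head (left), end = tail (right).
function makeList() {
  const items = [];
  return {
    lPush: (v) => items.unshift(v), // LPUSH → insert at head
    rPush: (v) => items.push(v),    // RPUSH → append at tail
    lPop: () => items.shift(),      // LPOP  → remove from head
    rPop: () => items.pop(),        // RPOP  → remove from tail
  };
}

// Stack (LIFO): LPUSH + LPOP
const stack = makeList();
stack.lPush("a"); stack.lPush("b");
// stack.lPop() → "b" (last in, first out)

// Queue (FIFO): RPUSH + LPOP
const queue = makeList();
queue.rPush("job1"); queue.rPush("job2");
// queue.lPop() → "job1" (first in, first out)
```

Same four primitives, two opposite access patterns — which is why one data type covers both stacks and queues.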

4. Hashes – Field-Value Maps (Objects)

Hashes store a map of fields → string values under one key — ideal for objects (users, products, settings).

Why Hashes?

  • Memory efficient vs many small string keys (listpack vs hashtable encoding)
  • Atomic field operations
  • Used for: user profiles, shopping carts, configuration objects

Internals note: Small hashes use a compact listpack encoding (ziplist before Redis 7) — sequential but memory-cheap; larger ones switch to a hashtable (faster random access).

Core Hash Commands

# Create / update fields
HSET user:100 name "Kamran" age 24 city "Khost"

# Set multiple fields atomically
HMSET product:42 name "Laptop" price 1200 stock 15   # older syntax, still works
HSET product:42 "inStock" true "tags" "electronics,gaming"  # modern preferred

# Get one field
HGET user:100 name          # → "Kamran"

# Get multiple fields
HMGET user:100 name age city nonexistent   # → list of values, nil for missing

# Get all fields & values
HGETALL user:100            # → field1 value1 field2 value2 ...

# Get only fields or only values
HKEYS user:100
HVALS user:100

# Check if field exists
HEXISTS user:100 age        # → 1 or 0

# Increment field (integer or float)
HINCRBY user:100 visits 1
HINCRBYFLOAT product:42 price 99.99

# Delete fields
HDEL user:100 city oldfield

# Length (number of fields)
HLEN user:100

# Remove entire hash
DEL user:100

Real-world patterns

# User session / profile
HSET session:abc123 user_id 100 last_active 1738500000 token "xyz..."
HINCRBY session:abc123 page_views 1

# Shopping cart
HSET cart:kamran-2026 item:sku123 quantity 2 price 49.99
HINCRBY cart:kamran-2026 item:sku456 quantity 1

# Atomic counter inside hash
HINCRBY rate_limit:user:kamran:2026-02 api_calls 1

Common mistakes

  • Storing huge values in fields → use separate keys or JSON in string
  • Using HGETALL on very large hashes → slow; use HSCAN instead
  • Forgetting small hashes are memory-optimized only up to ~128 fields / ~64 bytes per value by default (configurable via hash-max-listpack-entries / hash-max-listpack-value, formerly hash-max-ziplist-*)

Quick Summary – Hashes Cheat Sheet

  • HSET / HMSET — set field(s)
  • HGET / HMGET / HGETALL — read
  • HINCRBY / HINCRBYFLOAT — atomic counters
  • HEXISTS / HLEN / HKEYS / HVALS — introspection
  • HDEL — remove field
  • Pattern: objects, profiles, carts, configs
  • Avoid HGETALL on large hashes → prefer HSCAN

5. Sets – Unordered Unique Collections

Sets hold unique strings (no duplicates, no order).

Why Sets?

  • Fast membership checks (O(1))
  • Set operations: union, intersection, difference
  • Used for: tags, unique visitors, friends lists, deduplication

Core Set Commands

SADD tags "redis" "database" "cache"

# Add multiple
SADD visited:2026-02 user1 user2 user3

# Check membership
SISMEMBER tags redis       # → 1 or 0

# Get all members
SMEMBERS tags              # → unordered list

# Remove
SREM tags "cache"

# Cardinality (size)
SCARD tags

# Random member(s)
SRANDMEMBER tags           # one random
SRANDMEMBER tags 3         # up to 3 distinct members (a negative count may repeat members)

# Pop (remove & return)
SPOP tags

# Set operations (non-destructive)
SUNION tags tags:old
SINTER tags tags:premium   # common tags
SDIFF tags tags:blocked    # tags in first but not second

# Store result
SUNIONSTORE all_tags tags tags:old

Real-world examples

# Unique visitors per day
SADD unique_visitors:2026-02-02 "kamran-ip" "guest123"

# Friends graph (basic)
SADD friends:kamran friendA friendB
SINTER friends:kamran friends:ali   # mutual friends

Pitfalls

  • SMEMBERS on huge sets → slow; use SSCAN
  • No ordering → use Sorted Sets if needed

Quick Summary – Sets Cheat Sheet

  • SADD / SREM / SISMEMBER
  • SCARD — size
  • SMEMBERS / SSCAN — read
  • SUNION / SINTER / SDIFF (+ ...STORE variants)
  • Pattern: unique items, tags, membership, intersections

6. Sorted Sets (ZSets) – Unique Elements with Scores

Sorted Sets = Sets + ordering by floating-point score

Why?

  • Leaderboards, priority queues, time-series with scores, range queries
  • Fast rank lookup, range by score or rank

Core Commands

ZADD leaderboard 1500 "player1" 1200 "player2" 1800 "kamran"

# Increment score
ZINCRBY leaderboard 100 "kamran"   # → 1900

# Get rank (0-based, low→high score)
ZRANK leaderboard kamran            # → 2 (highest score = last ascending rank; use ZREVRANK for leaderboard position)

# Reverse rank (high→low)
ZREVRANK leaderboard kamran

# By score range
ZRANGEBYSCORE leaderboard 1000 2000 WITHSCORES

# By rank range
ZRANGE leaderboard 0 9 REV WITHSCORES   # top 10 (REV needs Redis 6.2+; older: ZREVRANGE)

# Remove
ZREM leaderboard "player2"

# Cardinality
ZCARD leaderboard

# Count in score range
ZCOUNT leaderboard 1500 +inf

# Many more: ZREMRANGEBYRANK, ZREMRANGEBYSCORE, ZLEXCOUNT...

Real-world patterns

# Daily leaderboard
ZADD scores:2026-02-02 850 "kamran" 920 "ali"

# Time-series (score = timestamp)
ZADD events $(date +%s) "user:login:kamran"

# Priority queue
ZADD tasks 10 "low" 1 "urgent" 5 "medium"
ZRANGE tasks 0 0   # peek highest priority (lowest score); ZPOPMIN tasks actually pops it

Pitfalls

  • Scores are floats → precision issues possible
  • Large ranges slow → paginate with LIMIT

Quick Summary – Sorted Sets Cheat Sheet

  • ZADD / ZINCRBY
  • ZRANGE / ZRANGEBYSCORE / ZREVRANGE
  • ZRANK / ZREVRANK
  • ZREM / ZCARD / ZCOUNT
  • Pattern: leaderboards, rankings, priorities, time-ordered unique events

7. Streams – Append-Only Logs (Redis 5.0+)

Streams = log-like structure, consumer groups, blocking reads → message queues / event sourcing.

Core Commands

# Add event (fields come in name/value pairs)
XADD mystream * temp 23.5 sensor room1   # * = auto ID (timestamp-seq)

# Read range
XRANGE mystream - + COUNT 10

# Read new events (blocking)
XREAD BLOCK 5000 STREAMS mystream $

# Consumer group
XGROUP CREATE mystream group1 $

# Read as consumer
XREADGROUP GROUP group1 consumer1 COUNT 10 BLOCK 0 STREAMS mystream >

Pattern: reliable queues, event sourcing, change data capture.

(Deep dive possible in next part if requested.)

8. Bitmaps, HyperLogLog, Geo, & Keyspace Essentials

Bitmaps → SETBIT / GETBIT / BITCOUNT (analytics, user flags)
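The bit-level semantics behind SETBIT / GETBIT / BITCOUNT can be sketched over a plain byte array (the helper names are illustrative). As in Redis, bit 0 is the most significant bit of byte 0:

```javascript
// Sketch of Redis bitmap semantics over a plain byte array.
// Bit 0 = most significant bit of byte 0, matching SETBIT/GETBIT numbering.
function setBit(buf, offset, value) {
  const byte = offset >> 3;            // which byte holds this bit
  const mask = 0x80 >> (offset & 7);   // which bit inside that byte
  if (value) buf[byte] |= mask;
  else buf[byte] &= ~mask;
}

function getBit(buf, offset) {
  return (buf[offset >> 3] & (0x80 >> (offset & 7))) ? 1 : 0;
}

function bitCount(buf) { // like BITCOUNT: number of set bits
  let n = 0;
  for (const b of buf) for (let m = b; m; m &= m - 1) n++; // Kernighan's trick
  return n;
}

// Example: mark users 0 and 9 as "active today" (like SETBIT active:2026-02-02 <userId> 1)
const active = new Uint8Array(2); // 16 user slots in 2 bytes
setBit(active, 0, 1);
setBit(active, 9, 1);
// getBit(active, 9) → 1, bitCount(active) → 2 active users
```

This is what makes bitmaps so space-efficient: one boolean flag per user costs one bit, and BITCOUNT answers "how many were active?" in a single pass.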

HyperLogLog → PFADD / PFCOUNT (approx unique count, e.g. daily uniques)

GEO → GEOADD / GEORADIUS / GEOSEARCH (nearby users, stores)

Key operations everyone uses:

KEYS *                # dangerous in prod — use SCAN
SCAN 0 MATCH user:* COUNT 100

EXISTS key
DEL key1 key2

TYPE key              # string, list, hash, set, zset, stream...

EXPIRE / PEXPIRE / TTL / PTTL
PERSIST

RANDOMKEY
DBSIZE                # keys in current db
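A toy model of cursor iteration shows why SCAN is production-safe where KEYS is not: each call does a small bounded chunk of work and hands back a cursor (real SCAN cursors are not simple offsets and may revisit keys — this `scan` helper is purely illustrative):

```javascript
// Toy model of SCAN-style cursor iteration over a key space.
// Each call inspects at most `count` keys and returns the next cursor —
// bounded work per call, unlike KEYS which walks everything at once.
function scan(keys, cursor, count, match) {
  const end = Math.min(cursor + count, keys.length);
  const page = keys.slice(cursor, end).filter((k) => match.test(k));
  return { cursor: end >= keys.length ? 0 : end, keys: page }; // cursor 0 = iteration complete
}

const allKeys = ["user:1", "user:2", "order:7", "user:3", "order:8"];
let cursor = 0;
let found = [];
do {
  const res = scan(allKeys, cursor, 2, /^user:/); // like SCAN <cursor> MATCH user:* COUNT 2
  found = found.concat(res.keys);
  cursor = res.cursor;
} while (cursor !== 0);
// found → ["user:1", "user:2", "user:3"]
```

The server never blocks for long on any single call — that is the whole trade: more round trips in exchange for bounded latency.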

Danger zone commands (prod caution):

FLUSHDB / FLUSHALL
CONFIG GET/SET
INFO / MONITOR / SLOWLOG

Advanced Redis CLI: Streams, Pub/Sub, Scripting, Transactions, Persistence, Performance & Modules

9. Redis Streams – Append-Only Logs with Consumer Groups (Redis 5.0+)

Streams are append-only message logs with powerful consumption features — the closest Redis gets to Kafka-like durable queues / event sourcing.

Why Streams exist

  • Pub/Sub is fire-and-forget → messages lost if consumer offline
  • Lists are simple queues but lack history, consumer groups, acknowledgments
  • Streams offer: persistence, replayability, consumer groups (load balancing), acknowledgments, range queries, blocking reads

Message ID format

Every entry gets an auto-generated ID: timestamp-seq (e.g., 1738501234567-0)

  • Timestamp = milliseconds since Unix epoch
  • Seq = incrementing number for same-ms entries
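Since IDs are `timestamp-seq`, ordering them means comparing the two parts numerically, not comparing the strings. A hypothetical helper (using BigInt to stay safe with millisecond timestamps):

```javascript
// Compare two stream IDs of the form "timestamp-seq" (e.g. "1738501234567-0").
// Returns -1, 0, or 1, ordering by timestamp first, then sequence number.
function compareStreamIds(a, b) {
  const [tsA, seqA] = a.split("-").map((p) => BigInt(p));
  const [tsB, seqB] = b.split("-").map((p) => BigInt(p));
  if (tsA !== tsB) return tsA < tsB ? -1 : 1;
  if (seqA !== seqB) return seqA < seqB ? -1 : 1;
  return 0;
}

// "1738501234567-2" sorts before "1738501234567-10" numerically,
// even though a plain string comparison would put "-10" first.
```

Redis itself orders entries this way internally, which is what makes XRANGE over ID ranges well-defined.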

Core Commands – Producing

# Append entry (* = auto ID; fields are name/value pairs)
XADD mystream * event login user kamran ip 203.0.113.5 country PK

# Append with explicit ID (rare)
XADD mystream 1738500000000-0 temp 23.5 humidity 65

# Trim to keep newest N entries (auto-capping)
XTRIM mystream MAXLEN ~ 10000   # approximate, faster

Reading without groups (simple / historical)

# Range query
XRANGE mystream - + COUNT 10            # all, first 10
XRANGE mystream 1738490000000-0 +       # from ID onward

# Reverse
XREVRANGE mystream + - COUNT 5          # newest 5

# Blocking read new messages (poll style)
XREAD BLOCK 5000 STREAMS mystream $     # $ = newest, wait up to 5s

Consumer Groups – The Powerful Part (coordinated, at-least-once delivery)

  1. Create group
XGROUP CREATE mystream payments-group $ MKSTREAM
# $ = from newest   MKSTREAM = create stream if missing
# 0 = from beginning (replay all)

  2. Read as consumer
XREADGROUP GROUP payments-group worker-1 COUNT 10 BLOCK 0 STREAMS mystream >
# > = new messages never delivered to group
# BLOCK 0 = wait forever

  3. Acknowledge processed message (remove from PEL – Pending Entries List)
XACK mystream payments-group 1738501234567-3

  4. Inspect pending / claim stalled
XPENDING mystream payments-group                    # summary (count, ID range, per-consumer totals)
XPENDING mystream payments-group - + 100 worker-1   # details: up to 100 entries for worker-1

# Claim idle messages (> 60s pending)
XCLAIM mystream payments-group worker-2 60000 1738501234567-3

Real-world patterns

  • Reliable task queue: RPUSH → XADD, BRPOP → XREADGROUP + XACK
  • Event sourcing / audit log: append domain events, replay with XRANGE
  • Change data capture: app → XADD → workers process & acknowledge
  • Fan-out + load balancing: multiple workers in same group auto-balance

Common mistakes

  • Forgetting XACK → messages stay in PEL forever (memory leak)
  • Using XREAD instead of XREADGROUP for scalable consumers
  • Not trimming → unbounded growth (use MAXLEN / ~)
  • BLOCK 0 without careful error handling → stuck workers

Quick Summary – Streams Cheat Sheet

  • XADD key * field value … → append
  • XTRIM key MAXLEN ~ N → cap size
  • XGROUP CREATE key group $ → create group
  • XREADGROUP GROUP group consumer COUNT n BLOCK ms STREAMS key > → consume new
  • XACK key group id → acknowledge
  • XPENDING / XCLAIM → inspect & recover stalled
  • XRANGE / XREVRANGE → history / replay
  • Pattern: durable queues, event logs, CDC, background jobs

10. Pub/Sub – Real-time Broadcasting

Pub/Sub = classic publish-subscribe: fire-and-forget, ephemeral messages.

Why Pub/Sub

  • Ultra-low latency (~microseconds)
  • Fan-out to many subscribers
  • No persistence → lost if offline

Vs Streams (2025–2026 consensus)

  • Pub/Sub → real-time notifications, live chat, ephemeral events (speed > durability)
  • Streams → durable queues, event sourcing, at-least-once (reliability > speed)

Core Commands

# Subscribe
SUBSCRIBE chat:room1 chat:notifications

# Pattern subscribe
PSUBSCRIBE chat:room:*

# Publish
PUBLISH chat:room1 "Kamran joined from Khost"

# Unsubscribe
UNSUBSCRIBE
PUNSUBSCRIBE

Server-side channels info

PUBSUB CHANNELS chat:*
PUBSUB NUMSUB chat:room1
PUBSUB NUMPAT

Patterns & pitfalls

  • Chat apps, live scores, real-time UI updates
  • No history → combine with Streams for persistence if needed
  • No acknowledgment → lost messages common
  • High fan-out → CPU/memory spike with thousands of subscribers

Quick Summary – Pub/Sub Cheat Sheet

  • PUBLISH channel message
  • SUBSCRIBE / PSUBSCRIBE channel|pattern
  • UNSUBSCRIBE / PUNSUBSCRIBE
  • PUBSUB CHANNELS / NUMSUB / NUMPAT
  • Use when: ephemeral, real-time broadcast
  • Avoid when: need durability, ack, replay

11. Lua Scripting – Atomic Server-Side Logic

Lua scripts run atomically on server — perfect for complex read-modify-write without race conditions.

Core Commands

# Execute script (inline)
EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 mykey value

# With SHA (faster, cached)
SCRIPT LOAD "return redis.call('INCR', KEYS[1])"
EVALSHA <sha> 1 counter

# Debug / manage
SCRIPT KILL
SCRIPT FLUSH

Best practices (2025–2026)

  • Keep scripts short (< few ms)
  • Use KEYS for keys, ARGV for values → Redis parses them
  • Prefer EVALSHA after first LOAD (avoids re-compilation)
  • Return simple types (integer, string, table → multi-bulk)

Security warning – Critical (CVE-2025-49844 "RediShell")

  • Lua use-after-free → RCE possible via crafted script (CVSS 10.0)
  • Affects versions before ~7.2.11 / 7.4.6 / 8.0.4 / 8.2.2
  • Mitigation: upgrade immediately, restrict EVAL/EVALSHA via ACLs if unused, run as non-root, firewall Redis

Patterns

  • Atomic rate limiting, inventory decrement, distributed locks (Redlock variant)
  • Complex counters, multi-key ops without transactions

Quick Summary – Lua Scripting Cheat Sheet

  • EVAL script numkeys KEYS… ARGV…
  • EVALSHA sha numkeys KEYS… ARGV…
  • SCRIPT LOAD / KILL / FLUSH
  • Atomic multi-key logic
  • Security: patch CVE-2025-49844, restrict via ACL if possible

12. Transactions + WATCH – Atomic Batches & Optimistic Locking

MULTI / EXEC → queue commands, run atomically

With WATCH → optimistic locking (check-and-set)

Core Flow

WATCH balance account:kamran:spent
MULTI
  DECRBY balance 100
  INCRBY account:kamran:spent 100
EXEC               # (nil) if a watched key changed → retry

Patterns

  • Money transfer: watch both accounts
  • Versioned updates: watch key, check version, update if same
  • Best for low contention — under high contention the abort rate climbs → use Lua instead

Pitfalls

  • No rollback → commands already executed inside EXEC are not undone on error (design choice)
  • High contention → thrashing → prefer Lua scripts
  • WATCH scope: only keys, not values
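The optimistic check-and-set loop can be modeled in memory with a versioned store standing in for WATCH + MULTI/EXEC (all names here are illustrative): read a version, compute, commit only if the version is unchanged, and retry otherwise.

```javascript
// Toy optimistic locking: a commit succeeds only if nobody changed the key
// since we read it — the same contract WATCH + MULTI/EXEC gives you.
const store = new Map(); // key → { value, version }

function read(key) {
  return store.get(key) ?? { value: undefined, version: 0 };
}

function commit(key, expectedVersion, newValue) {
  const cur = read(key);
  if (cur.version !== expectedVersion) return false; // like EXEC returning nil
  store.set(key, { value: newValue, version: cur.version + 1 });
  return true;
}

function withdrawWithRetry(key, amount, maxRetries = 5) {
  for (let i = 0; i < maxRetries; i++) {
    const { value, version } = read(key);      // like WATCH + GET
    if (value < amount) return false;          // insufficient funds → give up cleanly
    if (commit(key, version, value - amount)) return true;
    // else: someone raced us → re-read and retry (the "aborted EXEC" path)
  }
  throw new Error("too much contention — consider a Lua script instead");
}
```

Under heavy contention most iterations hit the retry branch, which is exactly why the pitfalls above recommend Lua for hot keys.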

Quick Summary – Transactions Cheat Sheet

  • MULTI → start queue
  • EXEC → run all or nothing
  • DISCARD → abort queue
  • WATCH key… → optimistic lock
  • Use for: simple atomic batches
  • Prefer Lua for: complex logic / high contention

13. Persistence & Configuration Essentials

RDB — point-in-time snapshot (fork + write file)

CONFIG SET save "900 1 300 10 60 10000"   # classic snapshot thresholds (Redis 7 defaults: "3600 1 300 100 60 10000")
SAVE / BGSAVE

AOF — append-only file (every write logged)

CONFIG SET appendonly yes
CONFIG SET appendfsync everysec   # everysec | always | no

Hybrid (recommended 2025+): RDB + AOF

Config key settings

CONFIG GET maxmemory*
CONFIG SET maxmemory 4gb
CONFIG SET maxmemory-policy allkeys-lru   # volatile-lru, allkeys-random, etc.
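The allkeys-lru policy above can be sketched in a few lines: when the store is over capacity, evict whichever key was used least recently. A toy version (the `LruCache` class is illustrative, not how Redis is implemented — Redis uses an approximated, sampled LRU):

```javascript
// Toy allkeys-lru: when over capacity, evict the least recently used key.
// A JS Map preserves insertion order, so re-inserting on every access keeps
// the most recently used entries at the end.
class LruCache {
  constructor(maxKeys) { this.max = maxKeys; this.map = new Map(); }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const v = this.map.get(key);
    this.map.delete(key); this.map.set(key, v); // mark as recently used
    return v;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      const oldest = this.map.keys().next().value; // least recently used
      this.map.delete(oldest);                      // evict, like maxmemory eviction
    }
  }
}
```

With maxmemory + allkeys-lru set, Redis behaves like this cache at the whole-keyspace level: writes never fail for lack of memory, old cold keys silently disappear instead.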

Pitfalls

  • No persistence → data loss on crash
  • AOF always → slow on high write
  • Large RDB → long fork time → latency spikes

14. Performance & Memory Pitfalls

Common killers

  • KEYS * / SMEMBERS / HGETALL on large structures → O(n) → block server → use SCAN / SSCAN / HSCAN
  • Huge strings/lists → memory fragmentation
  • Many small keys → key overhead (~64–100 bytes per key)
  • No maxmemory-policy → OOM killer
  • Hot keys → single-thread bottleneck
  • Large ziplists/hashtables crossing thresholds → conversion latency

Monitoring musts

INFO memory
INFO stats
INFO replication
SLOWLOG GET 10
LATENCY DOCTOR

Tips 2026

  • Use hash-tags {tag} for locality in cluster
  • Pipeline commands (client-side)
  • Lua for hot paths
  • Monitor evicted / fragmented memory

15. Popular Modules (JSON, Search, TimeSeries)

RedisJSON (formerly ReJSON)

JSON.SET user:100 $ '{"name":"Kamran","age":30}'
JSON.GET user:100 $.name
JSON.NUMINCRBY user:100 $.age 1

RediSearch (full-text search + vectors)

FT.CREATE idx ON HASH PREFIX 1 doc: SCHEMA title TEXT body TEXT price NUMERIC
HSET doc:1 title "Redis Guide" body "..."   # RediSearch 2.x indexes matching hashes automatically (FT.ADD was removed)
FT.SEARCH idx "guide" LIMIT 0 10

RedisTimeSeries (metrics, IoT)

TS.CREATE temp:room LABELS sensor room1
TS.ADD temp:room * 23.5
TS.RANGE temp:room - + AGGREGATION AVG 60000

These require Redis Stack / modules loaded.


Redis Usage With a Node.js App

Which Client Library to Use in 2026?

Two main contenders exist:

  • @redis/client (node-redis v4+) — Official recommendation from Redis for new projects (MIT license, actively maintained, full support for Redis 8+, Redis Stack modules like JSON/Search/TimeSeries, Promise-first API, clean modern syntax).
  • ioredis — Battle-tested, excellent for Cluster/Sentinel, auto-reconnect, used in many legacy/high-scale systems (BullMQ, etc.), but considered legacy for new work.

Recommendation (Feb 2026): Use @redis/client for new projects. It supports the newest features (RESP3, hash-field expiration, modules) and has better long-term support.

Install:

npm install redis
# or
yarn add redis

We'll use @redis/client (v4+) in all examples below.

1. Connecting to Redis

Basic connection (localhost:6379)

import { createClient } from "redis";

const client = createClient({
  url: "redis://localhost:6379", // or redis://:password@host:port/db
  // socket: { reconnectStrategy: retries => Math.min(retries * 50, 500) }
});

client.on("error", (err) => console.error("Redis Client Error", err));

await client.connect();

// Graceful shutdown
process.on("SIGINT", async () => {
  await client.quit();
  process.exit(0);
});

With password / TLS / Sentinel / Cluster → see official docs.

2. Strings – Caching, Counters, Locks

// Basic set / get
await client.set("user:kamran:name", "Kamran");
const name = await client.get("user:kamran:name"); // 'Kamran'

// With expiration (recommended for sessions/caches)
await client.set("session:abc123", "user100", { EX: 3600 }); // 1 hour

// Atomic counters
await client.set("views", "0");
await client.incr("views"); // → "1"
await client.incrBy("views", 15); // → "16"
await client.decrBy("views", 5); // → "11"

// Float counters
await client.set("pi", "3.14");
await client.incrByFloat("pi", 0.01); // → "3.15"

// Rate limiting example (simple fixed window)
const userId = "kamran";
const today = new Date().toISOString().slice(0, 10);
const rateKey = `rate:${userId}:${today}`;

const calls = await client.incr(rateKey);
if (calls === 1) {
  await client.expire(rateKey, 86400); // start the 24h window only on the first hit
}

if (calls > 100) {
  throw new Error("Rate limit exceeded");
}

3. Lists – Queues, Recent Items

// Queue (FIFO: RPUSH + LPOP or BRPOP)
await client.rPush("jobs", "render:video:uuid123");
const job = await client.bLPop("jobs", 0); // blocking wait forever → { key: 'jobs', element: 'render:video:uuid123' }

// Recent items (capped log)
await client.lPush("events", "user:login:kamran");
await client.lTrim("events", 0, 999); // keep newest 1000

// Stack (LPUSH + LPOP)
await client.lPush("undo", "action1");
const last = await client.lPop("undo");

4. Hashes – Objects / Profiles / Carts

await client.hSet("user:100", {
  name: "Kamran",
  age: "30",
  city: "Khost",
});

const user = await client.hGetAll("user:100"); // { name: 'Kamran', age: '30', city: 'Khost' }

await client.hIncrBy("user:100", "visits", 1);
await client.hIncrByFloat("cart:kamran", "total", 49.99);

await client.hDel("user:100", "city");

5. Sets – Unique Items, Tags, Intersections

await client.sAdd("tags:kamran", ["redis", "nodejs", "cache"]);
await client.sIsMember("tags:kamran", "redis"); // true

const mutual = await client.sInter(["friends:kamran", "friends:ali"]); // mutual friends
await client.sUnionStore("all:tags", "tags:kamran", "tags:old");

6. Sorted Sets – Leaderboards, Priorities

await client.zAdd("leaderboard:2026-02", [
  { score: 1900, value: "kamran" },
  { score: 1500, value: "ali" },
]);

await client.zIncrBy("leaderboard:2026-02", 100, "kamran"); // 2000

const top10 = await client.zRangeWithScores("leaderboard:2026-02", 0, 9, {
  REV: true, // highest scores first
});
// → [ { value: 'kamran', score: 2000 }, ... ]

const rank = await client.zRevRank("leaderboard:2026-02", "kamran"); // 0 = #1

7. Streams – Durable Queues & Event Logs

// Produce
const id = await client.xAdd("mystream", "*", {
  event: "login",
  user: "kamran",
  ip: "203.0.113.5",
});

// Consume with group (at-least-once)
await client.xGroupCreate("mystream", "workers", "$", { MKSTREAM: true });

const messages = await client.xReadGroup(
  "workers",
  "worker-1",
  [{ key: "mystream", id: ">" }], // new messages
  { COUNT: 10, BLOCK: 5000 },
);

// Ack
if (messages?.[0]?.messages) {
  for (const msg of messages[0].messages) {
    await client.xAck("mystream", "workers", msg.id);
  }
}

// Pending / claim stalled
const pending = await client.xPending("mystream", "workers");

8. Pub/Sub – Real-time

// Publisher
await client.publish("chat:room1", "Kamran joined from Khost");

// Subscriber (dedicated connection recommended)
const subClient = client.duplicate();
await subClient.connect();

await subClient.subscribe("chat:room1", (message) => {
  console.log("Received:", message);
});

// Pattern subscribe
await subClient.pSubscribe("chat:room:*", (message, channel) => {
  console.log(channel, message);
});

9. Transactions + WATCH (Optimistic Locking)

await client.watch(["balance:kamran", "spent:kamran"]);

const multi = client.multi();
multi.decrBy("balance:kamran", 100);
multi.incrBy("spent:kamran", 100);

const result = await multi.exec(); // null if watched keys changed → retry

10. Lua Scripting – Atomic Custom Logic

const sha = await client.scriptLoad(`
  local current = redis.call('GET', KEYS[1]) or '0'
  local updated = current + ARGV[1]
  redis.call('SET', KEYS[1], updated)
  return updated
`);

const newValue = await client.evalSha(sha, {
  keys: ["counter"],
  arguments: ["5"],
});

11. Best Practices & Pitfalls (Node.js Context)

  • Use one connection per process (or pool via library if needed)
  • Pipeline heavy batches: client.multi().set(...).get(...).exec() sends queued commands in one round trip
  • Use SCAN instead of KEYS *, and SSCAN / HSCAN instead of SMEMBERS / HGETALL on large collections
  • Handle reconnects & backpressure
  • Monitor memory (INFO memory) & set maxmemory-policy allkeys-lru
  • For high contention → prefer Lua over MULTI/WATCH
  • Use Redis Stack modules → JSON.GET / FT.SEARCH / TS.ADD (via same client)

Quick Summary – Node.js + Redis Cheat Sheet

  • Connect: createClient({ url }) → .connect()
  • Strings: set / get / incrBy / incrByFloat / set(..., { EX })
  • Lists: rPush / lPop / bLPop / lTrim
  • Hashes: hSet / hGetAll / hIncrBy
  • Sets: sAdd / sInter / sUnionStore
  • Sorted Sets: zAdd / zRange / zIncrBy / zRevRank
  • Streams: xAdd / xGroupCreate / xReadGroup / xAck
  • Pub/Sub: publish / subscribe / pSubscribe
  • Transactions: multi() → .exec()
  • Lua: scriptLoad / evalSha
  • Watch: .watch(keys) → multi() → exec()

Now this covers everything from basic CRUD to advanced production patterns in Node.js.
