Hi there! I'm Shrijith Venkatrama, founder of Hexmos. Right now, I'm building LiveAPI, a first-of-its-kind tool that automatically indexes API endpoints across all your repositories. LiveAPI helps you discover, understand, and use APIs in large tech infrastructures with ease.
Redis is known as a fast in-memory data store, often used for caching and session management. But it has a lot more to offer. Beyond the basics, Redis packs some lesser-known features that can solve tricky problems and make your applications more powerful. This post dives into these hidden capabilities with practical examples and details to help you level up your Redis game.
Let’s explore 7 features that don’t get enough spotlight, complete with code, outputs, and tables where they fit.
1. Pub/Sub Messaging for Real-Time Communication
Redis isn’t just a key-value store; it’s a lightweight publish/subscribe (Pub/Sub) system. This lets you build real-time features like chat systems or live notifications. Clients can subscribe to channels, and publishers send messages to those channels without worrying about who’s listening.
Why it’s cool: It’s simple, fast, and scales well for real-time apps. You don’t need a dedicated message broker like RabbitMQ for smaller use cases.
Example: Building a Simple Chat System
Here's a Node.js example using the `ioredis` library to create a basic chat system.
// publisher.js
const Redis = require('ioredis');
const redis = new Redis();
async function publishMessage(channel, message) {
await redis.publish(channel, message);
console.log(`Published "${message}" to ${channel}`);
}
publishMessage('chat:room1', 'Hello, everyone!');
// Output: Published "Hello, everyone!" to chat:room1
// subscriber.js
const Redis = require('ioredis');
const redis = new Redis();
redis.subscribe('chat:room1', (err, count) => {
if (err) {
console.error('Failed to subscribe:', err);
} else {
console.log(`Subscribed to ${count} channel(s)`);
}
});
redis.on('message', (channel, message) => {
console.log(`Received "${message}" from ${channel}`);
});
// Output: Subscribed to 1 channel(s)
// Received "Hello, everyone!" from chat:room1
Run `subscriber.js` first, then `publisher.js`. The subscriber listens to `chat:room1` and logs any messages sent to it.
Key Details:
- Pub/Sub is fire-and-forget; messages aren't stored.
- Use `PSUBSCRIBE` for pattern-based subscriptions (e.g., `chat:*`); see the sketch below.
- Great for ephemeral data but not for persistent messaging.
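For pattern-based subscriptions, here is a minimal sketch in Python with redis-py (a different client than the ioredis example above, so treat the exact API shape as an assumption of that library). The `chat:*` pattern matches `chat:room1` and any other chat channel.

```python
# pattern_subscriber.py -- a minimal PSUBSCRIBE sketch with redis-py
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Subscribe to every channel matching chat:*
p = r.pubsub()
p.psubscribe('chat:*')

# Block and print each message as it arrives
for message in p.listen():
    if message['type'] == 'pmessage':
        print(f"[{message['channel']}] {message['data']}")
```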
2. Lua Scripting for Atomic Operations
Redis supports Lua scripting, letting you run custom scripts atomically on the server. This is huge for complex operations that need to stay consistent without multiple round-trips to the server.
Why it’s cool: You can combine multiple Redis commands into one script, reducing latency and ensuring no other client interrupts your operation.
Example: Incrementing a Counter with Limits
Here’s a Lua script to increment a counter but cap it at 10.
-- max_counter.lua
local key = KEYS[1]
local current = tonumber(redis.call('GET', key) or 0)
if current < 10 then
redis.call('INCR', key)
return current + 1
else
return current
end
Load and run it with Python using `redis-py`:
# run_counter.py
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
# Load the script
with open('max_counter.lua') as f:
script = r.register_script(f.read())
# Run the script
result = script(keys=['counter'], args=[])
print(f"Counter value: {result}")
# Output: Counter value: 1 (first run)
# Counter value: 2 (second run)
# ...
# Counter value: 10 (after 10 runs, stays at 10)
Key Details:
- Scripts are atomic and block other clients during execution.
- Use `EVAL` or `EVALSHA` to run scripts; `EVALSHA` is more efficient (see the sketch below).
- Debug with `SCRIPT DEBUG YES` for development.
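The `register_script` helper above already handles caching for you; as a rough sketch of doing the same flow by hand (assuming the same `max_counter.lua` file is on disk), you can load the script once with `SCRIPT LOAD` and reuse its SHA1 digest via `EVALSHA`:

```python
# evalsha_example.py -- a minimal SCRIPT LOAD + EVALSHA sketch with redis-py
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

with open('max_counter.lua') as f:
    lua_source = f.read()

# Load the script once; Redis returns its SHA1 digest
sha = r.script_load(lua_source)

# Reuse the cached script by digest instead of resending the script body
result = r.evalsha(sha, 1, 'counter')  # 1 = number of KEYS, followed by the key itself
print(f"Counter value: {result}")
```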
3. Streams for Log-Like Data Processing
Redis Streams, introduced in Redis 5.0, are a powerful way to handle log-like data. They’re like a lightweight Kafka, letting you append events and consume them with consumer groups.
Why it’s cool: Streams support persistent, ordered data with consumer groups for scalability and fault tolerance.
Example: Tracking User Actions
Here’s how to use Streams to log and read user actions in Python.
# stream_example.py
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
# Add actions to a stream
r.xadd('user_actions', {'user_id': '123', 'action': 'login'})
r.xadd('user_actions', {'user_id': '123', 'action': 'view_page'})
r.xadd('user_actions', {'user_id': '456', 'action': 'logout'})
# Read the stream
actions = r.xrange('user_actions', '-', '+')
for action_id, fields in actions:
print(f"ID: {action_id}, Fields: {fields}")
# Output: ID: 1698192001234-0, Fields: {'user_id': '123', 'action': 'login'}
# ID: 1698192001235-0, Fields: {'user_id': '123', 'action': 'view_page'}
# ID: 1698192001236-0, Fields: {'user_id': '456', 'action': 'logout'}
Key Details:
- Use `XADD` to append, `XREAD` or `XRANGE` to read.
- Consumer groups (`XGROUP`) allow multiple consumers to process different messages (see the sketch below).
- Streams are persistent until explicitly deleted.
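To give consumer groups a concrete shape, here's a minimal sketch with redis-py that builds on the `user_actions` stream above; the group and consumer names (`action_processors`, `worker-1`) are hypothetical.

```python
# consumer_group_example.py -- a minimal consumer-group sketch with redis-py
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Create the group once, starting from the beginning of the stream
try:
    r.xgroup_create('user_actions', 'action_processors', id='0', mkstream=True)
except redis.ResponseError:
    pass  # group already exists

# Read up to 10 pending-new messages as consumer "worker-1"
entries = r.xreadgroup('action_processors', 'worker-1', {'user_actions': '>'}, count=10)
for stream_name, messages in entries:
    for message_id, fields in messages:
        print(f"Processing {message_id}: {fields}")
        r.xack('user_actions', 'action_processors', message_id)  # mark as handled
```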
4. HyperLogLog for Approximate Counting
Redis’ HyperLogLog (HLL) is a probabilistic data structure for counting unique items (like unique visitors) with minimal memory. It’s perfect when exact counts aren’t critical.
Why it’s cool: HLL uses roughly 12KB of memory to count millions of unique items, with a standard error of about 0.81%.
Example: Counting Unique Visitors
Here’s a Node.js example to track unique visitors.
// hll_example.js
const Redis = require('ioredis');
const redis = new Redis();
async function trackVisitors() {
await redis.pfadd('unique_visitors', 'user1', 'user2', 'user3');
await redis.pfadd('unique_visitors', 'user2', 'user4'); // user2 is a duplicate
const count = await redis.pfcount('unique_visitors');
console.log(`Unique visitors: ${count}`);
// Output: Unique visitors: 4
}
trackVisitors();
Key Details:
- Commands: `PFADD` to add items, `PFCOUNT` to get the count.
- Merge multiple HLLs with `PFMERGE` (see the sketch below).
- Error rate is low but not zero; don’t use it for billing.

| Command | Purpose | Example Output |
| --- | --- | --- |
| `PFADD` | Add items to the HLL | 1 (if new items were added) |
| `PFCOUNT` | Get the approximate unique count | 4 (for the example above) |
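As a quick sketch of `PFMERGE`, here's how per-day counters could be merged into a weekly one with redis-py; the key names (`visitors:mon`, `visitors:tue`, `visitors:week`) are made up for illustration.

```python
# pfmerge_example.py -- a minimal PFMERGE sketch with redis-py
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Track unique visitors per day
r.pfadd('visitors:mon', 'user1', 'user2', 'user3')
r.pfadd('visitors:tue', 'user2', 'user4')

# Merge both days into a weekly HLL, then count it
r.pfmerge('visitors:week', 'visitors:mon', 'visitors:tue')
print(f"Unique visitors this week: {r.pfcount('visitors:week')}")
# Expected output: Unique visitors this week: 4
```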
5. Geospatial Indexes for Location-Based Queries
Redis’ geospatial indexes let you store and query geographic data (like latitude/longitude). It’s great for location-based features like finding nearby users or stores.
Why it’s cool: Fast and simple for proximity searches without a dedicated GIS database.
Example: Finding Nearby Coffee Shops
Here’s a Python example to store and query coffee shop locations.
# geo_example.py
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
# Add coffee shops as a flat (longitude, latitude, name, ...) sequence (redis-py 4+)
r.geoadd('coffee_shops', (13.361389, 38.115556, 'Shop1', 15.087269, 37.502669, 'Shop2'))
# Find shops within 100 km of a point (GEOSEARCH requires Redis 6.2+)
shops = r.geosearch('coffee_shops', longitude=14.0, latitude=38.0, radius=100, unit='km')
print(f"Nearby shops: {shops}")
# Output: Nearby shops: ['Shop1']
Key Details:
- Use `GEOADD` to store coordinates, `GEOSEARCH` or `GEORADIUS` to query.
- Supports distance calculations with `GEODIST` (see the sketch below).
- Data is stored in a sorted set under the hood.
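For distances, here's a minimal `GEODIST` sketch that reuses the `coffee_shops` key populated in the example above:

```python
# geodist_example.py -- a minimal GEODIST sketch with redis-py
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Distance between two members of the geospatial index, in kilometers
distance = r.geodist('coffee_shops', 'Shop1', 'Shop2', unit='km')
print(f"Shop1 -> Shop2: {distance} km")
# Expected output: roughly 166 km for the coordinates used above
```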
6. Probabilistic Data Structures: Bloom Filters
Redis supports Bloom filters (via the RedisBloom module) to check if an item exists in a set with minimal memory. It’s ideal for scenarios like checking if a username is taken.
Why it’s cool: Bloom filters are memory-efficient and fast, with a small chance of false positives.
Example: Checking Username Availability
Assuming RedisBloom is installed, here’s a Python example.
# bloom_example.py
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
# Add usernames to a Bloom filter
r.execute_command('BF.ADD', 'usernames', 'alice')
r.execute_command('BF.ADD', 'usernames', 'bob')
# Check if usernames exist
exists_alice = r.execute_command('BF.EXISTS', 'usernames', 'alice')
exists_charlie = r.execute_command('BF.EXISTS', 'usernames', 'charlie')
print(f"Alice exists: {exists_alice}, Charlie exists: {exists_charlie}")
# Output: Alice exists: 1, Charlie exists: 0
Key Details:
- Requires the RedisBloom module (not in core Redis).
- Commands: `BF.ADD`, `BF.EXISTS`.
- False negatives are impossible, but false positives can occur; the rate can be tuned with `BF.RESERVE` (see the sketch below).
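As a rough sketch (RedisBloom required), here's how a filter could be created with an explicit false-positive rate via `BF.RESERVE` and filled in bulk with `BF.MADD`; the key name `usernames_tuned` is hypothetical.

```python
# bloom_reserve_example.py -- a minimal BF.RESERVE sketch (requires RedisBloom)
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Reserve a filter with a 0.1% false-positive rate and room for ~100k items
r.execute_command('BF.RESERVE', 'usernames_tuned', 0.001, 100000)

# Add several usernames in one call
r.execute_command('BF.MADD', 'usernames_tuned', 'alice', 'bob', 'carol')

# Check membership
print(r.execute_command('BF.EXISTS', 'usernames_tuned', 'alice'))  # 1
print(r.execute_command('BF.EXISTS', 'usernames_tuned', 'dave'))   # 0 (almost certainly)
```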
7. Time Series Data with RedisTimeSeries
RedisTimeSeries (another module) is designed for storing and analyzing time-series data, like sensor readings or stock prices. It’s optimized for high ingestion rates and efficient queries.
Why it’s cool: Handles downsampling, retention policies, and range queries out of the box.
Example: Tracking Temperature Readings
Here’s a Python example using RedisTimeSeries.
# timeseries_example.py
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
# Create a time series
r.execute_command('TS.CREATE', 'temperature', 'RETENTION', 3600000) # 1 hour retention
# Add temperature readings
r.execute_command('TS.ADD', 'temperature', '*', 22.5)
r.execute_command('TS.ADD', 'temperature', '*', 23.0)
# Query the data
data = r.execute_command('TS.RANGE', 'temperature', '-', '+')
print(f"Temperature readings: {data}")
# Output: Temperature readings: [['1698192001234', '22.5'], ['1698192001235', '23.0']]
Key Details:
- Requires the RedisTimeSeries module.
- Commands: `TS.ADD` for data, `TS.RANGE` for queries.
- Supports aggregation (e.g., average, min, max); see the sketch below.
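As a minimal sketch of aggregation (assuming the `temperature` series above already holds some samples), here's a per-minute average query with `TS.RANGE`:

```python
# ts_aggregation_example.py -- a minimal TS.RANGE aggregation sketch (requires RedisTimeSeries)
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Average readings over 60-second buckets across the whole series
buckets = r.execute_command('TS.RANGE', 'temperature', '-', '+', 'AGGREGATION', 'avg', 60000)
for timestamp, avg_value in buckets:
    print(f"{timestamp}: avg {avg_value}")
```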
What to Do with These Redis Features
Redis is more than a cache—it’s a Swiss Army knife for developers. Pub/Sub can power real-time apps, Lua scripts ensure atomicity, and Streams handle event logs. HyperLogLog and Bloom filters save memory for counting or membership checks, while geospatial indexes simplify location queries. TimeSeries tackles metrics with ease.
To get started, try these features in a local Redis instance or a cloud service like Redis Enterprise. Experiment with the examples above, and check the linked docs for deeper dives. If you’re building a new project, think about where these capabilities fit—chances are, Redis can simplify your stack.