Redis is an open source, in-memory data store that can be used as a database, distributed cache, and message broker. It supports strings, lists, sets, hashes, and other data structures, and it provides atomic operations on these types, such as pushing an element onto a list.
A common use case for Redis is as a distributed cache in a microservice-oriented architecture. Let's say you have a service that calls a third-party API, and only so many requests can be made before you're rate limited. You could cache the third-party response in process memory, but you'll typically run multiple instances of your service, so each instance would keep its own cache and make its own third-party calls, which doesn't scale. You could use a traditional database, but if the third-party data is served to end users in real time, the added latency hurts. This is where Redis comes in: it offers lower latency than a traditional disk-backed database and can act as the single source of truth for all instances of your service, so fewer calls reach the third party.
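This caching approach is usually called the cache-aside pattern, and it can be sketched as follows. Everything here is illustrative: `fetch_from_third_party` is a hypothetical stand-in for the rate-limited call, and a small dict-based class stands in for a real Redis client (a redis-py client exposes the same `get`/`setex` calls, so the shape of `get_quote` would be unchanged):

```python
import time

# Hypothetical stand-in for the rate-limited third-party call.
def fetch_from_third_party(key):
    return f"payload-for-{key}"

class InMemoryCache:
    """Tiny dict-based stand-in for a Redis client (GET/SETEX only)."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # entry expired
            return None
        return value

    def setex(self, key, ttl_seconds, value):
        # Store the value together with its expiry time.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

def get_quote(cache, key, ttl_seconds=60):
    """Cache-aside: try the cache first, fall back to the third party."""
    cached = cache.get(key)
    if cached is not None:
        return cached
    fresh = fetch_from_third_party(key)
    cache.setex(key, ttl_seconds, fresh)
    return fresh
```

With a shared Redis instance in place of `InMemoryCache`, every service instance reads and writes the same keys, so a response fetched by one instance is reused by all of them until the TTL expires.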
Why is it faster? Because Redis keeps its dataset in RAM, reads avoid the seek time of a disk-backed store, which makes queries return faster. If you can serve more requests from in-memory data, that's a better experience for your end users.
An eviction policy is a common feature of caches: it defines which items to remove from the data store, and when, once a configured memory limit is reached. Redis offers eight eviction policies to choose from.
You might be thinking there are too many to choose from and decide to just use the default, but that is unwise. The default is `noeviction`, which means that when memory is full, nothing is evicted and Redis returns an error on writes. That's rarely what you want in production, and it's why you should use one of the other eviction policies, which keep accepting writes instead of failing them.
The policy names combine a key scope with an eviction strategy. `volatile` means only keys with an expire set are eligible for eviction, so you need to explicitly mark entries as evictable. `allkeys`, on the other hand, makes every key eligible for eviction. As for the strategy, `lru` means that when the memory limit is reached, the least recently used items (those that have gone longest without being accessed) are deleted first. `lfu` instead tracks how frequently items are accessed, so items that are used rarely are evicted while the ones used often have a higher chance of remaining in memory.
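To build some intuition for LRU behavior, here is a minimal LRU cache in Python. This is purely illustrative of the idea, not how Redis implements it: Redis does not track exact recency but samples a handful of keys and evicts the best candidate among them, approximating LRU at much lower cost.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal exact-LRU cache for illustration only."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # least recently used key comes first

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used
```

With a capacity of 2, inserting `a` and `b`, reading `a`, and then inserting `c` evicts `b`, because `b` is the key that has gone longest without being accessed.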
If you're unsure, the officially recommended eviction policy from Redis is `allkeys-lru`.
| Policy | Behavior |
| --- | --- |
| `noeviction` (default) | Returns an error if the memory limit has been reached when trying to insert more data |
| `allkeys-lru` (recommended if unsure) | Evicts the least recently used keys out of all keys |
| `allkeys-lfu` | Evicts the least frequently used keys out of all keys |
| `allkeys-random` | Randomly evicts keys out of all keys |
| `volatile-lru` | Evicts the least recently used keys out of all keys with an expire set |
| `volatile-lfu` | Evicts the least frequently used keys out of all keys with an expire set |
| `volatile-random` | Randomly evicts keys out of all keys with an expire set |
| `volatile-ttl` | Evicts the keys with the shortest time to live out of all keys with an expire set |
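Note that eviction only kicks in once a memory limit is configured. A sketch of the relevant `redis.conf` directives (the 100mb limit is an arbitrary example):

```conf
# Eviction triggers once this limit is hit; 100mb is an arbitrary example.
maxmemory 100mb
# Evict the least recently used key, considering all keys.
maxmemory-policy allkeys-lru
```

The same settings can be changed on a running server with `CONFIG SET maxmemory 100mb` and `CONFIG SET maxmemory-policy allkeys-lru`.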
There are many more reasons and use cases for Redis than what I covered. However, I hope this provides an overview of some of what Redis offers and how you can start using it. Remember: if data volatility is acceptable in your use case, you should use `allkeys-lru`. Do you currently use Redis? Let me know your thoughts down below.
If you like what you've read, want to continue the discussion, or have anything else on your mind, reach out to me on Twitter.