Hello to my fellow developers!
Today, I'm super excited to show you a project I've been working on that may just make it easier for us to work with function outputs that don't always need to be recomputed. It's called `semantic_cache`, a Python package that uses Qdrant, a vector database, to cache function results based on semantic similarity.
This can be particularly helpful if you're handling heavy, repetitive computation in data processing, AI model inference, or just about anywhere else you see fit!
What's the Big Idea?
The idea for `semantic_cache` was born out of an effort to avoid recomputing data processing functions that are expensive to run. The same inefficiency crops up every time such a function is called with the same or very similar arguments. That's where `semantic_cache` comes into play!
Using `semantic_cache`, we can store the results of these function calls. When a new call is made with arguments akin to a prior call (based on a similarity threshold), we return the result of that prior call instead of actually recomputing it.
How Does it Work?
The package provides a decorator that wraps a Python function and caches its output. Let's break down how it works (a code sketch follows the list):
- Initialization: We set up a `QdrantClient` to handle communication with the Qdrant database. For simplicity, the examples here run an in-memory instance.
- Decorator Setup: `semantic_cache` is a decorator factory that accepts a `similarity_threshold`, which determines how close the arguments of two calls need to be for the calls to be treated as equivalent.
- Caching Mechanism: When a decorated function is called, the wrapper builds a string representation of the function name, its positional arguments, and its keyword arguments, then checks whether a similar call is already stored in the Qdrant database.
  - If a similar call is found (i.e., its similarity to the current call exceeds the specified threshold) and its result is cached, that result is returned immediately.
  - If no such call is found, the function is executed and the result is stored in both the Qdrant database and a local cache dictionary.
- Qdrant Integration: Integrating with Qdrant gives `semantic_cache` powerful vector search capabilities, so similar function calls can be found efficiently. This goes well beyond traditional caching mechanisms, which can only check for exact matches.
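To make that flow concrete, here is a minimal sketch of how such a decorator could be implemented. To be clear, this is my own illustration rather than the package's actual internals: the embedding model (`all-MiniLM-L6-v2` from sentence-transformers), the one-collection-per-function layout, and the helper names are all assumptions.

```python
import uuid
from functools import wraps

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

# Hypothetical sketch: the real package's internals may differ.
client = QdrantClient(":memory:")  # in-memory Qdrant instance, as described above
model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings


def semantic_cache(similarity_threshold=0.95):
    def decorator(func):
        collection = f"cache_{func.__name__}"  # one collection per function
        if not client.collection_exists(collection):
            client.create_collection(
                collection_name=collection,
                vectors_config=VectorParams(size=384, distance=Distance.COSINE),
            )
        local_cache = {}  # point id -> cached result

        @wraps(func)
        def wrapper(*args, **kwargs):
            # String representation of the call, embedded as the search key.
            key = f"{func.__name__}:{args}:{sorted(kwargs.items())}"
            vector = model.encode(key).tolist()

            # Look for the most similar previous call in Qdrant.
            hits = client.search(
                collection_name=collection, query_vector=vector, limit=1
            )
            if (
                hits
                and hits[0].score >= similarity_threshold
                and hits[0].id in local_cache
            ):
                return local_cache[hits[0].id]  # cache hit: skip recomputation

            # Cache miss: compute, then store in Qdrant and the local dict.
            result = func(*args, **kwargs)
            point_id = str(uuid.uuid4())
            client.upsert(
                collection_name=collection,
                points=[PointStruct(id=point_id, vector=vector, payload={"call": key})],
            )
            local_cache[point_id] = result
            return result

        return wrapper

    return decorator
```

One design note on this sketch: keeping a local dictionary alongside Qdrant means the vectors only index the calls, while the (possibly large or non-serializable) results stay in process memory.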
Why Qdrant?
I chose Qdrant for this project because it is built for high-performance vector search, which makes it an excellent fit for use cases like finding similar function calls based on semantic similarity.
Qdrant is not only powerful but also scalable, and it supports a variety of advanced search features that are very useful for a nuanced caching mechanism like this one.
A Real-World Application
Just imagine this being used in a recommendation system, where user inputs may differ slightly but lead to similar computation paths. There, `semantic_cache` can be a huge time saver.
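As a hedged illustration (the function name, the 0.9 threshold, and the placeholder body below are all made up), two nearly identical queries could then share one computation:

```python
from semantic_cache import semantic_cache


@semantic_cache(similarity_threshold=0.9)
def recommend_products(query):
    # Stand-in for an expensive retrieval and ranking pipeline.
    return [f"recommendation for: {query}"]


print(recommend_products("warm jackets for winter hiking"))
print(recommend_products("warm jacket for winter hikes"))  # close enough: likely a cache hit
```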
Try it Out!
Do give `semantic_cache` a shot. It's easy to integrate into existing projects and can provide real value for applications that involve heavy computation.
Here is a simple example to start you off:
```python
from semantic_cache import semantic_cache


@semantic_cache(similarity_threshold=0.95)
def expensive_function(data):
    # Imagine some heavy processing here
    processed_data = data  # placeholder for the real computation
    return processed_data
```
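Assuming the caching behaves as described above, a second call with a near-identical argument should then come straight from the cache (this snippet is illustrative, not a measured result):

```python
first = expensive_function("normalize the Q1 sales data")
second = expensive_function("normalise the Q1 sales data")  # semantically close: likely a cache hit
```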
Wrapping Up
I'm still very early in the development of this project and open to guidance from the community on how to take it further. If you have any ideas, suggestions, or even questions, feel free to write to me or contribute to the project on GitHub.
Happy coding, and let's make our software smarter and more efficient together!