In a previous post, we measured that compressing large JSON documents before sending them to Redis was faster than sending them as-is. I made those measurements on my own computer, against a local Redis database.
Now that the principle had been validated, I needed to know whether this result could be replicated in an environment closer to the production environment we have at Forest Admin.
I ran the same benchmark on a server where:
- Server performance is the same as in production (except for the load induced by other requests)
- The Redis server is comparable to the one used in production
⬇️ Download speed comparison
In this first graph, we will compare the time it takes to download the same JSON documents from Redis and decompress them, using four different methods:
- Uncompressed JSON document
- Compressed with brotli-1
- Compressed with gzip-3
- Compressed with deflate-3
These algorithms appeared to be the fastest in their respective families during my first tests. This second test, on a production-like environment, confirmed that result, which is why I decided not to publish the same level of detail as last time.
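For illustration, here is a minimal sketch of this kind of measurement in Node.js, assuming ioredis and the built-in zlib module. The key names, and the idea that the same document was stored once per variant beforehand, are assumptions made for the example, not the original benchmark code.

```ts
import { brotliDecompressSync, gunzipSync, inflateSync } from 'node:zlib';
import Redis from 'ioredis';

// One key per method; the same JSON document is assumed to be stored once per variant.
// Key names are illustrative.
const VARIANTS: Record<string, (raw: Buffer) => Buffer> = {
  'doc:plain': (raw) => raw,                        // uncompressed JSON, nothing to do
  'doc:brotli-1': (raw) => brotliDecompressSync(raw),
  'doc:gzip-3': (raw) => gunzipSync(raw),
  'doc:deflate-3': (raw) => inflateSync(raw),
};

async function benchmarkDownload(redis: Redis): Promise<void> {
  for (const [key, decompress] of Object.entries(VARIANTS)) {
    const start = process.hrtime.bigint();
    const raw = await redis.getBuffer(key);         // fetch the raw bytes from Redis
    if (!raw) continue;                             // key not found, skip
    const json = decompress(raw).toString('utf8');  // decompressed JSON text
    const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${key}: ${elapsedMs.toFixed(1)} ms, ${raw.length} B stored, ${json.length} chars of JSON`);
  }
}

// new Redis() connects to a local instance by default; point it at your own server.
benchmarkDownload(new Redis()).finally(() => process.exit(0));
```

Note that `getBuffer` matters here: the compressed payload is binary, so it has to be read back as a Buffer rather than as a UTF-8 string.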
Performance is very similar for documents smaller than 4 MB, but the difference becomes significant beyond that size.
For larger documents, the three compression methods remain close to one another in combined download and decompression time.
⬆️ Upload speed comparison
The same protocol was applied to all algorithms. As with the download comparison, the challengers are the same ones selected in the previous test.
When writing JSON documents larger than 4 MB, the results show that compressing them is worth it. For a 10 MB document, the combined compression and upload time is almost halved, dropping from just under 300 ms to about 150 ms with brotli-1.
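For reference, a minimal write-path sketch under the same assumptions (Node.js, ioredis, built-in zlib); the function name and key are hypothetical:

```ts
import { constants, brotliCompressSync } from 'node:zlib';
import Redis from 'ioredis';

// Compress a JSON document with brotli at quality 1, then write the raw bytes to Redis.
async function storeCompressed(redis: Redis, key: string, document: unknown): Promise<void> {
  const compressed = brotliCompressSync(Buffer.from(JSON.stringify(document)), {
    // quality 1 = the "brotli-1" setting benchmarked in this post
    params: { [constants.BROTLI_PARAM_QUALITY]: 1 },
  });
  await redis.set(key, compressed);
}
```

The synchronous zlib API is used here for brevity; in a real server the asynchronous `brotliCompress` (or a worker thread) would avoid blocking the event loop.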
🏅 Brotli-1: the winner in a production-like environment
This second test shows that brotli-1 makes a significant difference in both upload and download performance for documents larger than 4 MB.
At Forest Admin, we need to store some JSON documents that are larger than 4 MB, and we will definitely try this solution in production, using a canary deployment.
The space saving of brotli-1 was measured at more than 90% on the types of documents we store. So, in addition to faster data transfers, this solution will also save a lot of space on our Redis instances.
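If you want to check the saving on your own documents before committing to this, a quick hypothetical helper could look like the following; keep in mind that the 90% figure above is specific to the documents we store.

```ts
import { constants, brotliCompressSync } from 'node:zlib';

// Returns the fraction of space saved by brotli-1 on a given document,
// e.g. 0.92 means the compressed payload is 92% smaller than the plain JSON.
function spaceSaving(document: unknown): number {
  const plain = Buffer.from(JSON.stringify(document));
  const compressed = brotliCompressSync(plain, {
    params: { [constants.BROTLI_PARAM_QUALITY]: 1 },
  });
  return 1 - compressed.length / plain.length;
}
```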
⚠ Be sure to test the algorithm in a production-like environment
Be careful before using this solution in your own environment, because results can vary a lot with:
- the type of document
- the compression algorithm
- the compression level
For instance, brotli with the default compression level is very effective in terms of space saving, but also very slow during the compression phase.
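For context, this is how the quality level is controlled in Node's built-in zlib: leaving the option out uses the default quality (11, the maximum), which is what makes "plain" brotli so slow here, while the benchmarks above use quality 1. The payload below is just a placeholder.

```ts
import { constants, brotliCompressSync } from 'node:zlib';

const payload = Buffer.from(JSON.stringify({ some: 'large document' })); // placeholder document

// No options: Node uses BROTLI_DEFAULT_QUALITY (11), the best ratio but the slowest compression.
const slowButSmall = brotliCompressSync(payload);

// Explicitly requesting quality 1 trades a little ratio for much faster compression.
const fastEnough = brotliCompressSync(payload, {
  params: { [constants.BROTLI_PARAM_QUALITY]: 1 },
});

console.log(slowButSmall.length, fastEnough.length);
```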
When testing in a production-like environment, some algorithms were also measured as slower than the original solution of sending the plain JSON documents, as you can see below.
Compressing documents with deflate and a compression level of 0 was faster than using no compression on my laptop, but became slower on a real server.