Making efficient use of the network is essential.
Efficient applications run faster and feel smoother, even under poor network conditions.
Efficiency can be achieved in two stages:
- Design time
- Runtime
If the application is already designed and actively running, you might not want to touch the design part.
There is also a well-known principle in application development: don't optimize prematurely.
Runtime optimizations alone can often reduce network usage to the desired level. When that is the case, redesigning the application counts as premature optimization.
Design Time Optimizations
Design time optimizations for network usage relate to the overall design of how the application interacts with network resources.
For example, using GraphQL can reduce network usage for many applications, because GraphQL allows applications to fetch only the data they need and nothing else.
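As a rough illustration, a field-limited GraphQL request could look like the sketch below; the endpoint, schema, and field names are invented for the example:

```typescript
// Hypothetical GraphQL request; endpoint and field names are made up.
const query = `
  query {
    user(id: "42") {
      name        # only the fields the view actually needs...
      avatarUrl   # ...instead of the whole user object
    }
  }
`;

const response = await fetch("https://api.example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});
const { data } = await response.json();
console.log(data.user.name);
```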
Some applications access the backend via a BFF (*Backend for Frontend*). This can also help reduce network usage from the application's point of view: the BFF can aggregate data from multiple servers and serve it to the application through a single API call.
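A minimal sketch of such a BFF endpoint, assuming Express, Node 18+ (for the global `fetch`), and two invented internal service URLs:

```typescript
import express from "express";

const app = express();

// One client call fans out to several internal services on the server side,
// so the app pays for a single round trip instead of many.
app.get("/bff/home", async (_req, res) => {
  const [profile, orders] = await Promise.all([
    fetch("http://profile-service/me").then((r) => r.json()),
    fetch("http://order-service/recent").then((r) => r.json()),
  ]);
  res.json({ profile, orders }); // single aggregated payload
});

app.listen(3000);
```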
These are some of the optimizations you can consider to reduce network usage at the design stage:
- Going beyond REST and creating optimized API endpoints
- Keeping request/response schemas as small as possible
- Using efficient serialization formats, such as protocol buffers (see the sketch below)
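To give a feel for that last item, here is a minimal sketch using the protobufjs package; the Album message and sample values are invented for the demo:

```typescript
import protobuf from "protobufjs";

// A made-up message definition for the demo.
const { root } = protobuf.parse(`
  syntax = "proto3";
  message Album {
    int32 userId = 1;
    int32 id = 2;
    string title = 3;
  }
`);

const Album = root.lookupType("Album");
const album = { userId: 1, id: 7, title: "quidem molestiae enim" };

const asJson = Buffer.from(JSON.stringify(album));
const asProto = Album.encode(Album.create(album)).finish();

console.log(asJson.length, "bytes as JSON");
console.log(asProto.length, "bytes as protobuf"); // field names never hit the wire
```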
Runtime Optimizations
Runtime optimizations relate to how smart the application behaves while it runs.
For example, a smart application shouldn't request the same resource twice. This can be achieved by implementing caching strategies.
Caching is one of the most popular ways to reduce both network usage and network latency.
Despite its popularity, there is no ready-to-go solution for caching. Applications should use the right caching strategy for their use case. In fact, caching is one of the hardest problems in software development. Remember the famous quote:
> There are only two hard things in computer science: cache invalidation and naming things. - Phil Karlton
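As a small illustration, a naive time-based cache around `fetch` could look like this; real strategies (invalidation, size limits, HTTP cache headers) depend on the use case:

```typescript
// Naive in-memory cache with a TTL; a sketch, not a production strategy.
const cache = new Map<string, { expires: number; data: unknown }>();

async function cachedFetch(url: string, ttlMs = 60_000): Promise<unknown> {
  const hit = cache.get(url);
  if (hit && hit.expires > Date.now()) {
    return hit.data; // cache hit: no network call at all
  }
  const data = await fetch(url).then((r) => r.json());
  cache.set(url, { expires: Date.now() + ttlMs, data });
  return data;
}
```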
With the increasing popularity of reactive programming, there are some clever tricks to reduce the number of network calls.
For example, imagine the application has a search box that shows results in real time. Instead of sending a search request every time a letter is typed, applications use a **debounce** operator, which sends the request only after the user stops typing.
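With RxJS, that could look roughly like this (`searchApi` and `renderResults` are hypothetical):

```typescript
import { fromEvent, debounceTime, distinctUntilChanged, map, switchMap } from "rxjs";

declare function searchApi(query: string): Promise<string[]>; // hypothetical
declare function renderResults(results: string[]): void;      // hypothetical

const input = document.querySelector<HTMLInputElement>("#search")!;

fromEvent(input, "input")
  .pipe(
    map(() => input.value),
    debounceTime(300),                      // wait until typing pauses for 300 ms
    distinctUntilChanged(),                 // skip duplicate queries
    switchMap((query) => searchApi(query))  // drop stale in-flight requests
  )
  .subscribe(renderResults);
```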
The Simple Trick: Compression
Wait, did you make me read the article just to let me know about compression?
Don't get frustrated; I have a good reason! Besides, I talk about other things, too. Bear with me 🧸
You might be using compression wrong.
Compression is very simple to adopt, because well-known algorithms are available out of the box.
The problem is compressing encrypted data.
You should never compress encrypted data, because it simply doesn't work: the compressed output is hardly smaller than the original, and the application wastes CPU cycles in the attempt 😮.
Instead, you should always encrypt the compressed data.
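In Node.js terms, the right order looks roughly like this; a minimal sketch with aes-256-cbc and brotli, not the exact code of the repository mentioned below:

```typescript
import { brotliCompressSync } from "node:zlib";
import { createCipheriv, randomBytes } from "node:crypto";

function compressThenEncrypt(plain: Buffer, key: Buffer): Buffer {
  const compressed = brotliCompressSync(plain); // patterns are still intact here
  const iv = randomBytes(16);
  const cipher = createCipheriv("aes-256-cbc", key, iv);
  // Prepend the IV so the receiver can decrypt later.
  return Buffer.concat([iv, cipher.update(compressed), cipher.final()]);
}

const key = randomBytes(32); // 256-bit demo key; use real key management in practice
const payload = Buffer.from("lorem ipsum ".repeat(100));
console.log(payload.length, "->", compressThenEncrypt(payload, key).length);
```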
How to Tell If My Application Gets It Right?
Simply toggle compression on and off and compare the payload sizes. If the sizes are very close, your application probably encrypts first.
Payload Comparison: Compress First vs. Encrypt First
To make the comparison easier, I created a small example repository.
The example uses aes-256-cbc as the encryption method and brotli as the compression algorithm. You can also check the example on GitHub.
| resource | original (bytes) | encryptFirst (bytes) | compressFirst (bytes) | reduction (compressFirst vs. encryptFirst) |
| --- | --- | --- | --- | --- |
| http://jsonplaceholder.typicode.com/albums?userId=1 | 816 | 746 | 244 | 67.29% |
| http://jsonplaceholder.typicode.com/albums | 9333 | 7472 | 1723 | 76.94% |
| http://jsonplaceholder.typicode.com/photos | 1071472 | 820717 | 98447 | 88.00% |
Why Doesn't Compression Work After Encryption?
The reason is simple. Compression algorithms try to represent the same data with fewer bytes.
They first try to identify repeating patterns or some kind of order in the original data. If the original data contains the phrase lorem ipsum five times, the compressed output replaces the recurring occurrences with references to the first occurrence.
The responsibility of encryption algorithms is to break recognizable patterns in the original data. They scramble the data so that it can't be understood by external actors. In cryptography terms, they create a high-entropy representation of the original data.
Essentially, encrypted data loses its recognizable patterns and becomes much harder to compress.
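You can observe this directly with a quick check: brotli shrinks repetitive data dramatically, while random bytes (a stand-in for ciphertext) barely shrink at all.

```typescript
import { brotliCompressSync } from "node:zlib";
import { randomBytes } from "node:crypto";

const repetitive = Buffer.from("lorem ipsum ".repeat(1000)); // 12,000 bytes
const highEntropy = randomBytes(repetitive.length);          // looks like ciphertext

console.log(brotliCompressSync(repetitive).length);  // tiny: patterns collapse
console.log(brotliCompressSync(highEntropy).length); // roughly the input size
```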
A real-life equivalent could be a shopping list.
- Egg
- Egg
- Egg
- Egg
- Egg
- Milk
You can easily remember this list, because it comes down to:
- 5 x Egg
- Milk
But that is not possible with the following list:
- Egg
- Milk
- Bread
- Apples
- Lemons
- Chocolate
Even though both lists contain the same number of items, the second list can't be compressed. That is what encryption does to compression.
Further Reading
Check out these articles for further reading:
- https://www.geeksforgeeks.org/difference-between-data-encryption-and-data-compression/
- https://www.encryptionconsulting.com/education-center/encryption-and-compression/
As an additional note, there are some security concerns regarding compression, but those are outside the scope of this post.