One of the most significant performance improvements I’ve seen in a microservices platform was moving from 🐢 HTTP/1.1 to 🚀 HTTP/2. But that change also brought a lot of complexity.
Why is HTTP/2 faster? 🤔
In HTTP/1.1, the client opens a connection, sends a request (GET, POST, etc.), and gets a response. Connections can be reused via keep-alive (if supported), but requests on a connection are strictly sequential: one must complete before the next can be sent, which is why clients end up opening several parallel connections per host.
However, with HTTP/2, requests are multiplexed: the client can have many requests in flight on a single connection and doesn’t have to wait for a response before sending the subsequent request.
Moving from synchronous to asynchronous communications is a considerable performance advantage.
The advantages include:
- Reduced time waiting for previous requests to finish
- Connection reuse over a single long-lived connection, eliminating repeated TCP and TLS handshake time
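
To make this concrete, here’s a minimal Go sketch (the URL is just a placeholder for any HTTP/2-capable endpoint). Go’s net/http client negotiates HTTP/2 automatically over TLS when the server supports it, so the ten concurrent requests below are multiplexed over one connection instead of queuing behind each other:

```go
// Minimal sketch: one shared client, many in-flight requests.
// Go's net/http negotiates HTTP/2 automatically for HTTPS servers
// that support it. The URL is a placeholder endpoint.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	url := "https://example.com/" // placeholder endpoint

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println(n, "error:", err)
				return
			}
			resp.Body.Close()
			// resp.Proto reports the negotiated protocol, e.g. "HTTP/2.0".
			fmt.Println(n, resp.Proto, resp.Status)
		}(i)
	}
	wg.Wait()
	fmt.Println("total:", time.Since(start))
}
```

Pointed at an HTTP/1.1-only server, the same code would need multiple TCP connections, each with its own handshake, to achieve the same concurrency.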
But HTTP/2 adds complexity to load balancing. ⚠️
When you use a protocol like HTTP/2, where requests can be multiplexed across a single connection, traditional load balancers that perform layer 4 (connection-based) load balancing do not adequately distribute load.
kube-proxy, Kubernetes’ default Service proxy, is a common example.
When only connections are load-balanced, a client that multiplexes everything over one long-lived HTTP/2 connection ends up sending all of its requests to a single backend.
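
You can observe this pinning directly with net/http/httptrace (again, the URL is a placeholder). Over HTTP/2, every request below typically reports the same remote address with reused=true, which is exactly the traffic pattern a layer 4 balancer has no chance to spread:

```go
// Sketch: observe which TCP connection serves each request. Over HTTP/2,
// the requests typically all report the same remote address, which is why
// connection-level (L4) balancing pins all traffic to one backend.
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptrace"
)

func main() {
	url := "https://example.com/" // placeholder endpoint

	for i := 0; i < 5; i++ {
		trace := &httptrace.ClientTrace{
			GotConn: func(info httptrace.GotConnInfo) {
				fmt.Printf("request %d -> %s (reused=%v)\n",
					i, info.Conn.RemoteAddr(), info.Reused)
			},
		}
		req, _ := http.NewRequestWithContext(
			httptrace.WithClientTrace(context.Background(), trace),
			http.MethodGet, url, nil)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		resp.Body.Close()
	}
}
```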
There are many ways to address this, but the most common is to move away from connection-based load balancing and leverage a service mesh that load-balances at layer 7 (per request).
But this is yet another system to manage and maintain.
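
For illustration, here’s a toy sketch of what “per request” means: a reverse proxy that picks a backend for each request rather than for each connection, built on Go’s httputil.ReverseProxy. The backend addresses are made up; a real mesh sidecar like Envoy does the same job with health checks, retries, and much more:

```go
// Toy layer 7 (per-request) load balancer: each incoming request is
// proxied to the next backend in round-robin order, regardless of which
// client connection it arrived on. Backend addresses are hypothetical.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical backend pods; in Kubernetes these would be pod IPs.
	backends := []*url.URL{
		{Scheme: "http", Host: "10.0.0.1:8080"},
		{Scheme: "http", Host: "10.0.0.2:8080"},
		{Scheme: "http", Host: "10.0.0.3:8080"},
	}

	var next uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// The key idea: choose a backend per request, not per connection.
			target := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```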
Moving from HTTP/1.1 to HTTP/2 can be a great advantage, but it comes with a price.
