TL;DR: HTTP/2 replaces HTTP/1.1's six-connections-per-domain workaround with a single, multiplexed connection. By breaking responses into small, interleaved frames, it eliminates HTTP-level head-of-line blocking and stops parallel connections from fighting over bandwidth, letting browsers request and download hundreds of files concurrently without the overhead of repeated TCP handshakes.
We’ve had first HTTP, but what about second HTTP? Back in the "olden days," websites were tiny. You had an HTML file, a CSS file, and maybe a couple of GIFs. For that scale, HTTP/1.1 worked fine. But modern web engineering has moved toward shipping hundreds of small files—JavaScript modules, fragmented CSS, and optimized assets.
When you try to shove 100 files through a protocol designed for a handful, things get slow. It isn't just about the raw size of the data; it's about how the protocol manages the "wire" itself. Let's look at why we had to move to HTTP/2 to build the websites we actually want to build today.
Why is the 6-connection limit a problem for modern sites?
HTTP/1.1 browsers typically limit themselves to six concurrent TCP connections per domain. If your site requires 100 files to render, the browser is forced to download them in batches of six, leaving the remaining 94 files stuck in a queue until a slot opens up.
This isn't just a queuing issue; it's a resource management problem. Each of those six connections takes time to establish via a TCP handshake (plus a TLS handshake on HTTPS). Once they are open, the connections don't cooperate; they compete with each other for the available bandwidth. Instead of one smooth stream of data, you have six processes creating congestion on the same network path. For a site with 100+ assets, this batching adds a queuing delay on every wave of requests, and no amount of per-file optimization can remove the queue itself.
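The queuing cost is easy to estimate on the back of an envelope. Here's a minimal sketch, assuming one in-flight request per connection, an illustrative 50 ms round-trip time, and ignoring handshake and transfer time entirely (all numbers are assumptions, not measurements):

```python
import math

ASSETS = 100   # files the page needs
POOL = 6       # typical per-domain connection limit in HTTP/1.1 browsers
RTT_MS = 50    # assumed round-trip time per request/response

# With one in-flight request per connection, the browser needs
# ceil(100 / 6) sequential "waves" of requests.
waves = math.ceil(ASSETS / POOL)
print(waves)           # 17
print(waves * RTT_MS)  # 850 ms of queuing latency alone
```

Even in this best case, the page pays for 17 round trips of pure queuing before a single byte of payload is counted.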
What happens when a file gets stuck in HTTP/1.1?
In HTTP/1.1, if one file in a connection downloads slowly or gets "stuck," that entire connection is blocked until the transfer completes. This is known as Head-of-Line (HOL) blocking, where a single heavy asset prevents every subsequent file in the queue from moving forward.
If you're down to five active connections because one is hung up on a large image or a slow server response, your throughput drops immediately. There is no way for the browser to say, "Hey, skip that big file and send me the tiny JS snippet instead." The protocol is strictly sequential within those six pipes. If the "head" of the line is blocked, everything behind it stays put.
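A toy model makes the "head of the line" visible. The transfer times below are illustrative numbers, not measurements; the point is only that everything queued behind the slow asset inherits its delay:

```python
# Transfer times (ms) queued on ONE HTTP/1.1 connection.
# The 3000 ms image sits at the head of the line.
queue = [3000, 10, 10, 10]  # illustrative numbers

finish_times = []
elapsed = 0
for cost in queue:
    elapsed += cost           # each transfer must wait for the one before it
    finish_times.append(elapsed)

print(finish_times)  # [3000, 3010, 3020, 3030]
# Three 10 ms files all wait ~3 seconds for the blocked head to clear.
```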
How does HTTP/2 multiplexing eliminate the queue?
HTTP/2 drops the multi-connection workaround and uses one single connection per origin to request everything at once. It does this by chopping every response into small frames and interleaving them, so data for all 100 files can move down the wire concurrently.
Because it's one connection, we avoid the overhead of multiple handshakes and the bandwidth contention between separate connections. If one response is slow to generate, it no longer holds the others hostage; the browser keeps receiving frames for the other 99 files, and everything is reconstructed on the other end. (One caveat: a lost TCP packet still stalls the entire connection until it is retransmitted; that residual head-of-line blocking at the transport layer is what HTTP/3 over QUIC addresses.)
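The chop-and-interleave idea can be sketched in a few lines. This is a simplified model, not the real HTTP/2 framing format (real frames also carry a type, flags, and length), and the file names and payload sizes are made up for illustration:

```python
from itertools import zip_longest

def chunk(data: bytes, size: int) -> list[bytes]:
    """Split a payload into fixed-size chunks (our stand-in for frames)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

files = {
    "app.js":    b"x" * 10,  # illustrative payloads
    "style.css": b"y" * 4,
    "logo.png":  b"z" * 7,
}

# Split every file into chunks, then round-robin the chunks so all
# three transfers make progress on one connection at the same time.
queues = {name: chunk(data, 3) for name, data in files.items()}
wire = []
for frames in zip_longest(*queues.values()):
    for name, frame in zip(queues, frames):
        if frame is not None:
            wire.append((name, frame))  # (stream id, payload)

# The wire alternates: app.js, style.css, logo.png, app.js, ...
print([name for name, _ in wire])
```

A slow `logo.png` would only slow down its own frames; the other streams keep flowing between them.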
| Feature | HTTP/1.1 (The Old Way) | HTTP/2 (The Modern Way) |
|---|---|---|
| Concurrency | 6 connections (Max) | 1 connection (Multiplexed) |
| File Handling | Sequential (one at a time per pipe) | Parallel (all at once via chunks) |
| Network Efficiency | High contention, multiple handshakes | Low overhead, optimized bandwidth |
| Failure Mode | HOL blocking stalls the queue | Interleaved frames prevent stalls |
Is setting up one connection really faster than six?
You might think more pipes equal more speed, but in networking the opposite is often true because of TCP's "slow start" algorithm. Every new connection begins with a small congestion window and ramps up over several round trips. A single HTTP/2 connection pays that ramp-up cost once and then stays warm, while six competing connections each start cold and keep triggering each other's congestion control.
By moving to HTTP/2, we've stopped trying to hack around the protocol and started using one that understands we’re building sites made of hundreds of files. It’s about getting everything down the wire as fast as possible so the user isn't left staring at a loading spinner while the browser tries to manage a congested queue. Cheers!
FAQ
Do I still need to bundle my files into one giant 'bundle.js' with HTTP/2?
While bundling isn't as critical for bypassing connection limits as it was in HTTP/1.1, it’s still useful for compression efficiency. However, HTTP/2 makes it much more performant to ship unbundled modules, which can lead to better caching strategies.
Does HTTP/2 work over unencrypted connections?
While the spec doesn't strictly require encryption, all major browser implementations (Chrome, Firefox, Safari) only support HTTP/2 over TLS (HTTPS). If you want the speed of Second HTTP, you need a TLS certificate.
How does the browser know how to put the chunks back together?
HTTP/2 uses a framing layer. Each chunk of data is wrapped in a 'frame' that contains a stream identifier. The browser sees these IDs and knows exactly which file each chunk belongs to, allowing it to reassemble the assets perfectly even though they arrived interleaved.
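That reassembly step is easy to illustrate. This is a toy version of the idea, with made-up payloads; real HTTP/2 frames also carry a type, flags, and a length, and stream IDs are assigned by the protocol:

```python
# Interleaved frames as (stream_id, payload) pairs, in arrival order.
frames = [
    (1, b"<html>"), (3, b"body{"), (1, b"</html>"), (3, b"}"),
]

# Group every payload under its stream ID to rebuild each file.
streams: dict[int, bytearray] = {}
for stream_id, payload in frames:
    streams.setdefault(stream_id, bytearray()).extend(payload)

print(bytes(streams[1]))  # b'<html></html>'
print(bytes(streams[3]))  # b'body{}'
```

Each stream comes out whole even though its frames arrived interleaved with everyone else's.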