DEV Community

Faizan Firdousi

The Evolution of HTTP: From HTTP/1.1 to HTTP/2 to HTTP/3

We all know that the internet is evolving rapidly, and users increasingly expect everything to load fast. Page load times have improved significantly over the years, driven largely by improvements in how browsers retrieve webpages from servers.

In this blog, I'll discuss everything I've learned about HTTP and its evolution from HTTP/1.1 to HTTP/2, with HTTP/3 still on the horizon for future exploration.

As we know, everything operates on a client-server model. The more efficient we can make communication between both parties, the greater the benefits we'll achieve.

Quick Overview of How HTTP Works

When you enter a website URL in your browser, it sends a GET request to the server, and the server responds with the HTML file that you see displayed.

In 1996, HTTP/1.0 was released with a simple design. The client sends a request line:

GET /index.html HTTP/1.0

The server responds with a status line and headers:

HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 1234

While simple, HTTP/1.0 had significant limitations. Remember that HTTP is an application layer protocol that runs on top of TCP, and TCP requires a three-way handshake before any data can flow. In HTTP/1.0, every GET request required establishing a new TCP connection (and, for encrypted sites, a new SSL handshake on top of it). This made performance quite poor. Additionally, encryption wasn't part of the specification itself; SSL existed, but only as a separate convention layered underneath.
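To make the cost concrete, here is a back-of-the-envelope sketch (my own simplified model, not from any spec) counting round trips when every request pays for a fresh TCP handshake versus reusing one connection:

```python
# Simplified round-trip model (illustrative only):
# - 1 RTT for the TCP three-way handshake before data can flow
# - 1 RTT for each request/response exchange
# TLS is ignored here; it would add even more round trips per connection.

def rtts_new_connection_per_request(n_requests: int) -> int:
    # HTTP/1.0 style: a new TCP handshake (1 RTT) before every request
    return n_requests * (1 + 1)

def rtts_persistent_connection(n_requests: int) -> int:
    # One TCP handshake, then the connection is reused for every request
    return 1 + n_requests

print(rtts_new_connection_per_request(10))  # 20 round trips
print(rtts_persistent_connection(10))       # 11 round trips
```

Even in this crude model, reusing the connection nearly halves the round trips for a page with ten resources, and the gap grows with the number of requests.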

HTTP/1.1

HTTP/1.1 was introduced in 1997 (RFC 2068) and later refined in 1999 (RFC 2616). It proved so robust that it remained the dominant version for over 15 years.

HTTP/1.1 was a much-improved version of HTTP/1.0. Unlike its predecessor, HTTP/1.1 made persistent connections the default. This meant one TCP handshake (and, for HTTPS, one TLS handshake) per connection, allowing multiple requests and responses to flow over it. The Connection: keep-alive header became implicit, representing a huge performance improvement.
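On the wire, it looks like this (hypothetical resources on an imaginary example.com): two requests sent back-to-back over the same TCP connection, with no new handshake between them. Note that HTTP/1.1 also made the `Host` header mandatory:

```
GET /index.html HTTP/1.1
Host: example.com

GET /style.css HTTP/1.1
Host: example.com
```

Since keep-alive is the default, either side now signals teardown explicitly by sending `Connection: close`.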

Advanced caching was another major feature introduced in HTTP/1.1: the `Cache-Control` header and validators such as `ETag` and `Last-Modified` gave servers fine-grained control over what clients could cache and reuse.
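A typical revalidation flow looks like this (URL and tag value made up for illustration): the server labels a response with an `ETag`, the client later asks whether it has changed, and an unchanged resource costs only a tiny `304` response instead of a full transfer:

```
HTTP/1.1 200 OK
Cache-Control: max-age=3600
ETag: "abc123"
Content-Type: text/html

(later, once the cached copy has gone stale)

GET /index.html HTTP/1.1
Host: example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
```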

While TLS remained a separate convention in HTTP/1.1, it matured significantly over time. By the late 1990s, SSL had evolved into TLS and was designed to work seamlessly with HTTP/1.1.

However, as users, we're always seeking better performance...

HTTP/2

After HTTP/1.1 had been widely used for many years, HTTP/2 was released in 2015 (RFC 7540) to address its remaining shortcomings. While HTTP/1.1 was a major improvement over HTTP/1.0 thanks to its persistent TCP connections, allowing multiple requests and responses over the same connection rather than opening new ones for each resource, it still had a serious limitation.

The design still suffered from head-of-line blocking: all responses on a single connection had to be returned in order. If one response was delayed, every other response behind it had to wait. To work around this, web browsers typically opened several parallel TCP connections (usually around six per domain) to fetch multiple resources simultaneously. While this approach helped, it was inefficient because it required extra TCP handshakes, redundant TLS negotiations, and increased network congestion.
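The effect is easy to see with a toy model (my own illustration, with made-up timings): on a single in-order connection, each response must wait for everything queued ahead of it, so one slow response penalizes every response behind it:

```python
# Toy model: per-response service times in milliseconds (made-up numbers)
times = {"style.css": 10, "app.js": 200, "logo.png": 15}

# In-order delivery: each response finishes only after all
# responses ahead of it in the queue have finished.
finish_in_order = {}
elapsed = 0
for name, t in times.items():
    elapsed += t
    finish_in_order[name] = elapsed

print(finish_in_order)
# style.css is done at 10 ms, app.js at 210 ms, and logo.png
# at 225 ms even though it only needed 15 ms of work itself.
```

The tiny logo pays a 210 ms tax just for being queued behind a slow script; that queuing tax is exactly what head-of-line blocking means.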

HTTP/2 solved these issues by introducing multiplexing, where multiple requests and responses can truly share a single TCP connection simultaneously without blocking each other.

How Multiplexing Works

In HTTP/2, you don't need to spin up multiple TCP sockets like in HTTP/1.1. Instead, you have one single TCP connection, and within that pipe, you can send all your requests together simultaneously to the server.

When mixing multiple requests in the same channel, you need a way to track which chunk belongs to which request. That's where Stream IDs come in. Every request the client makes gets tagged with a unique stream ID, and the server does the same when sending responses. Even though all data is interleaved on the wire, both sides can reassemble it correctly.

Under the hood, HTTP/2 also abandoned the old text-based format and uses a binary framing layer. This means requests and responses are broken into smaller frames (tiny packets of data) that can be interleaved. For example, frames for request A, request B, and request C can travel in mixed order like [A1][B1][A2][C1][B2][A3]. Upon arrival, the receiver sorts them back based on the stream IDs.
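The reassembly idea can be sketched in a few lines (heavily simplified; real HTTP/2 frames are binary, with a type, flags, and length in each frame header, and client-initiated streams use odd IDs, hence 1, 3, 5 below):

```python
# Each frame: (stream_id, payload chunk). Frames from different
# streams arrive interleaved on the single connection.
frames = [(1, "A1"), (3, "B1"), (1, "A2"), (5, "C1"), (3, "B2"), (1, "A3")]

# The receiver groups frames back into per-stream messages by stream ID.
streams: dict[int, str] = {}
for stream_id, chunk in frames:
    streams[stream_id] = streams.get(stream_id, "") + chunk

print(streams)  # {1: 'A1A2A3', 3: 'B1B2', 5: 'C1'}
```

Because every chunk is tagged, no stream has to wait for another stream's chunks at the HTTP layer; the tag is all the receiver needs to put each message back together.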

With this design, one slow response no longer holds up others at the HTTP layer. Additionally, HTTP/2 compresses repeated headers like cookies and user-agent strings using HPACK header compression, making it significantly faster and more efficient than HTTP/1.1.

HTTP/3

But wait, we're still not satisfied! Even though HTTP/2 solved many problems, it still had one fundamental issue lurking underneath: head-of-line blocking at the TCP layer itself.

You see, while HTTP/2 eliminated head-of-line blocking at the HTTP layer through multiplexing, it still relied on TCP as its transport protocol. TCP guarantees that packets arrive in order, which sounds great in theory, but it means that if a single TCP packet gets lost somewhere in the network, every single HTTP/2 stream has to wait for that packet to be retransmitted and received before any of them can proceed. So in some cases, this made HTTP/2 even slower than HTTP/1.1!

This is where HTTP/3 comes in as a revolutionary solution. Instead of trying to patch TCP, HTTP/3 said "forget TCP entirely" and built itself on top of QUIC (Quick UDP Internet Connections), which runs over UDP.

What Makes HTTP/3 Different

HTTP/3 uses QUIC as its transport protocol, and QUIC is fundamentally different from TCP. While TCP establishes one ordered stream of data, QUIC creates multiple independent streams within a single connection. If one stream loses a packet, only that specific stream needs to wait for retransmission; all other streams can continue flowing normally.

Think of it like this: imagine TCP as a single-lane tunnel where one broken-down car blocks all traffic behind it. QUIC is like a multi-lane highway where each lane operates independently; if there's an accident in one lane, traffic in the other lanes keeps moving smoothly.

Key Features of HTTP/3

Faster Connection Setup: One of the biggest wins with HTTP/3 is connection establishment speed. Traditional HTTP/2 requires separate handshakes for TCP and TLS, which typically takes 2-3 round trips before any actual data can be sent. HTTP/3 combines transport and encryption into a single handshake, reducing connection setup to just one round trip. Even better, for returning visitors, HTTP/3 supports 0-RTT (zero round-trip time) resumption, meaning requests can be sent immediately without any handshake delay.
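As rough arithmetic (simplified; exact counts vary by TLS version and implementation):

```python
# Round trips before the first HTTP request can be sent (illustrative)
tcp_handshake = 1
tls13_handshake = 1                             # TLS 1.3 takes one round trip
http2_setup = tcp_handshake + tls13_handshake   # 2 RTTs (3 with older TLS 1.2)
quic_setup = 1                                  # QUIC merges transport + TLS 1.3
quic_resumed = 0                                # 0-RTT resumption on return visits

print(http2_setup, quic_setup, quic_resumed)  # 2 1 0
```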

True Multiplexing Without Blocking: While HTTP/2 implements multiplexing on top of TCP, HTTP/3 leverages QUIC's native multiplexing capabilities. Each resource (like CSS, JavaScript, images) downloads on its own independent stream. If your cat photo loses a packet, your CSS and JavaScript files continue downloading uninterrupted.

Connection Migration: This is huge for mobile users. With HTTP/2, when you switch from Wi-Fi to cellular data, your connection breaks and has to be completely re-established. HTTP/3 supports connection migration: because QUIC identifies a connection by a connection ID rather than by IP address and port, the same connection seamlessly continues when switching networks. Your downloads don't get interrupted when you walk out of Wi-Fi range.

Built-in Security: Unlike previous versions where TLS was layered on top, HTTP/3 has TLS 1.3 encryption built directly into QUIC. This eliminates redundant handshakes and provides enhanced security by default.

How HTTP/3 is Currently Used

As of 2025, HTTP/3 adoption has been growing rapidly. According to recent data, over 95% of major browsers now support HTTP/3, and it's being used by approximately 26-34% of the top websites worldwide. Major players like Google, Facebook, Cloudflare, and YouTube have already implemented HTTP/3.

Chrome enabled HTTP/3 by default back in 2020, Firefox followed in 2021, and Safari shipped support more gradually, initially behind an experimental flag. On the server side, web servers like LiteSpeed, Nginx, and even Microsoft IIS have working HTTP/3 implementations.

The interesting thing is that you've probably already used HTTP/3 without realizing it. If you're using Chrome and visiting Google services, YouTube, or any Cloudflare-protected site, you're likely already experiencing HTTP/3's benefits.

However, adoption isn't universal yet. Many smaller websites and older infrastructure still haven't made the switch, partly because the performance benefits are most noticeable for high-traffic sites or users on unreliable networks. For simple websites that are already "good enough," the complexity of upgrading might not seem worth it.

The Beautiful Evolution of Internet Infrastructure

Isn't it fascinating how something as simple as loading a webpage has evolved into such a sophisticated dance of protocols and optimizations?

When you hit Enter after typing a URL, you're not just requesting a simple HTML file anymore. Behind that seemingly simple action, there's this beautiful orchestration of technology that has evolved over decades. Your browser is making intelligent decisions about which protocol version to use, establishing encrypted connections in milliseconds, multiplexing dozens of resources simultaneously across independent streams, and seamlessly handling network switches, all to get that webpage to you as fast as possible.

What started as a simple request-response pattern in HTTP/1.0 has evolved into this incredibly efficient system where your browser can download images, stylesheets, scripts, and fonts all at once, without any of them blocking each other, over a single connection that maintains itself even when you switch from Wi-Fi to cellular data.

The journey from HTTP/1.1's persistent connections to HTTP/2's multiplexing to HTTP/3's QUIC-based streams represents decades of engineers identifying bottlenecks and innovating solutions. Each generation didn't just add features, it fundamentally reimagined how data should flow across the internet.

And the best part? This entire evolution has been largely invisible to users. The same simple action of clicking a link or typing a URL now delivers an exponentially better experience, thanks to all this networking magic happening behind the scenes. It's a testament to how the internet continues to evolve and optimize itself while maintaining the simplicity that makes it accessible to everyone.

That's the beauty of internet evolution, constant improvement in the background, making every webpage load a little bit faster, a little bit more reliable, and a little bit more secure than before.
