Omkar Bhagat

HTTP Protocols Explained Simply

Think of HTTP (Hypertext Transfer Protocol) as the rules for how a waiter (the Server) delivers food (the Data) to your table (the Browser). Over the years, we’ve gotten much better at designing the restaurant.

First, let's look at the high-level picture of each HTTP protocol:

HTTP/1.0: The "One Trip" Rule

In the early days, the waiter was very inefficient. If you wanted a burger, fries, and a shake, the waiter would walk to the kitchen, grab the burger, bring it to you, and then go home. To get the fries, you had to call him back, he’d go to the kitchen, and then come back again.

The Technical Bit: Every single request required opening a new connection.

The Real-World Feel: Loading a page with 10 images felt like watching a slideshow. You’d see one image pop up, then a pause, then the next.

HTTP/1.1: The "Keep-Alive" Upgrade

We realized that firing the waiter after every dish was silly. In HTTP/1.1, the waiter stays at your table until you're done. However, there's a catch: he can only carry one dish at a time. He brings the burger, and you have to wait for him to return before he can fetch the fries. (Browsers work around this by hiring several waiters at once, typically six parallel connections per site, but each waiter is still strictly one-dish-at-a-time.)

Real Example: If a heavy, high-resolution banner image is at the top of a website, it "clogs" the pipe. The small text files behind it have to wait for that big image to finish loading before they can move. This is called Head-of-Line Blocking.
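Keep-alive is easy to poke at with Python's standard library. The sketch below is illustrative, not production code: it spins up a throwaway local server (the file names and response bodies are made up) and fetches three resources over a single reused TCP connection.

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # advertise HTTP/1.1 so the socket stays open

    def do_GET(self):
        body = f"hello from {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for keep-alive
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection, three requests: no re-dialing between dishes.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
results = []
for path in ("/index.html", "/style.css", "/logo.png"):
    conn.request("GET", path)  # reuses the same TCP connection each time
    resp = conn.getresponse()
    results.append((path, resp.status, resp.read().decode()))
conn.close()
server.shutdown()
print(results)
```

If you instead opened a fresh `HTTPConnection` for every path, you'd get the HTTP/1.0 pattern: three handshakes instead of one.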

HTTP/2: The "Magic Serving Tray"

This is where things got high-tech. Instead of a waiter with two hands, imagine a waiter with a massive tray that has dozens of tiny compartments. He can bring the burger, the fries, the shake, and the napkin all at the same exact time.

Key Features:

Multiplexing: Multiple requests happen at once over one connection. No one has to wait in line.

Server Push: If the waiter knows you always want ketchup with your fries, he'll just bring it before you even ask. (In practice this feature saw little adoption, and major browsers have since removed support for it.)

The Real-World Feel: Websites suddenly felt "snappy." Even heavy sites with lots of icons and scripts started loading almost instantly because everything arrived in one big delivery.

HTTP/3: The "No-Traffic" Route

HTTP/2 was great, but it had a weakness: if the waiter tripped on a rug, he’d drop the whole tray, and everything would stop until he cleaned it up. HTTP/3 uses a new system (called QUIC) that handles errors better. If one fry falls off the tray, he keeps walking and delivers the rest of the meal while someone else grabs a replacement fry.

The Real-World Feel: This is huge for mobile phones. If you're walking out of your house and your phone switches from Wi-Fi to 5G, HTTP/3 handles that handoff without dropping your connection or making you refresh the page.


To understand why 1.1 was an upgrade, you have to see how "polite" (and painfully slow) HTTP/1.0 was.

HTTP/1.0: The "Hang Up" Conversation

Imagine you are at a payphone. You have three things to tell your friend, but the line automatically hangs up after every answer.

  • Browser: "Can I have index.html?"
  • Server: "Sure, here is index.html."
  • [CONNECTION CLOSED]
  • Browser: (Wait... I need the CSS too! Redialing...) "Can I have style.css?"
  • Server: "Sure, here is style.css."
  • [CONNECTION CLOSED]
  • Browser: (Wait... I forgot the image! Redialing...) "Can I have logo.png?"
  • Server: "Here is logo.png."
  • [CONNECTION CLOSED]

Why was this a nightmare? The most "expensive" part of the internet (in terms of time) isn't sending the data; it's opening the connection.

To open a connection, the browser and server have to do a TCP "Three-Way Handshake":

  • Browser: "Hey, you there?"
  • Server: "Yeah, I'm here. You there?"
  • Browser: "Yeah, let's talk."

In HTTP/1.0, you had to do that handshake for every single file on the page. If a website had 20 small icons, you did 20 handshakes. It was like driving to the grocery store, buying one apple, driving home, and then driving back for a banana.


HTTP/1.1: The Serial Conversation

In 1.1, the conversation is strictly back-and-forth. The browser cannot ask for the next thing until the first thing arrives. (HTTP/1.1 did define "pipelining" to queue up requests, but it proved so unreliable that browsers shipped with it disabled.)

  • Browser: "Can I have index.html?"
  • Server: "Sure, here is index.html."
  • Browser: (Receives file) "Okay, now can I have style.css?"
  • Server: "Sure, here is style.css."
  • Browser: (Receives file) "Great, finally, can I have logo.png?"
  • Server: "Here is logo.png."

The Problem: If index.html is a massive file, the CSS and image are "blocked" and the screen stays blank for a long time.


HTTP/2: The Mixed Conversation

HTTP/2 introduces Binary Framing. It breaks the files into tiny "chunks" (frames) and mixes them together. The browser sends all requests at once, and the server sends bits of everything back simultaneously.

Browser: "I need index.html, style.css, and logo.png. Send them all!"

Server: "Coming right up. Here is a piece of the HTML... here is a piece of the CSS... here is a piece of the Image... here is the rest of the HTML..."

The Benefit: The browser gets enough "pieces" of the CSS and HTML at the same time to start drawing the page on your screen much faster.
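Here's a toy model of that framing in plain Python. It is a sketch of the idea, not the real wire format: the file contents, frame size, and stream numbering below are simplified stand-ins (though real HTTP/2 client-initiated streams do use odd ids).

```python
from itertools import zip_longest

# Toy model of HTTP/2 binary framing: each file becomes a stream of
# small frames tagged with a stream id, the frames are interleaved on
# one connection, and the receiver reassembles them per stream.
FRAME_SIZE = 4  # tiny on purpose; real DATA frames default to up to 16 KB

files = {
    1: "<html>...</html>",   # stream 1: index.html
    3: "body { margin:0 }",  # stream 3: style.css
    5: "PNGDATA",            # stream 5: logo.png
}

# Sender: chop each file into (stream_id, chunk) frames...
streams = {
    sid: [(sid, data[i:i + FRAME_SIZE]) for i in range(0, len(data), FRAME_SIZE)]
    for sid, data in files.items()
}
# ...and interleave them round-robin onto a single "wire".
wire = [f for rnd in zip_longest(*streams.values()) for f in rnd if f]

# Receiver: reassemble the frames by stream id.
received = {}
for sid, chunk in wire:
    received[sid] = received.get(sid, "") + chunk

print([sid for sid, _ in wire[:6]])  # frames from different streams alternate
print(received == files)             # every file arrives intact
```

Because the wire alternates between streams, the browser starts seeing pieces of the HTML and CSS immediately instead of waiting for one file to finish.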


HTTP/3: The Smarter Conversation

HTTP/3 looks like HTTP/2 (everything happens at once), but the "language" it speaks is different. It uses QUIC (which runs over UDP) instead of TCP.

In HTTP/2, if one "piece" of the image gets lost in the mail, the server stops everything to find it. In HTTP/3, the server says, "I lost a piece of the image, but I'll keep sending you the CSS and HTML while I go find it."

Browser: "Send me the files over this new fast lane!"

Server: "Sending! (A packet of data gets lost)... Oh, looks like a packet dropped. I'll re-send that specific one in a second, but keep processing the other files I'm sending you right now."
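A tiny simulation makes the difference concrete. This is a deliberately simplified timing model (one packet per millisecond, one lost packet, a fixed retransmission delay); the stream names and numbers are illustrative assumptions, not real protocol behavior.

```python
# Toy timeline: 3 streams of 3 packets each, sent interleaved, one
# packet per millisecond. The packet at LOSS_INDEX is lost and its
# retransmission arrives 30 ms later. All numbers are illustrative.
LOSS_INDEX, RETRANSMIT_DELAY = 5, 30
packets = [(s, n) for n in range(3) for s in ("html", "css", "png")]

def finish_times(independent_streams: bool) -> dict:
    """Millisecond at which each stream is fully delivered."""
    done = {}
    for i, (stream, _) in enumerate(packets):
        if i == LOSS_INDEX:
            arrival = i + RETRANSMIT_DELAY  # lost; arrives late
        elif i > LOSS_INDEX and not independent_streams:
            # TCP (HTTP/2): everything behind the hole waits for the retransmit.
            arrival = max(i, LOSS_INDEX + RETRANSMIT_DELAY)
        else:
            # QUIC (HTTP/3): only the lossy stream itself is held up.
            arrival = i
        done[stream] = max(done.get(stream, 0), arrival)
    return done

print("HTTP/2 over TCP: ", finish_times(independent_streams=False))
print("HTTP/3 over QUIC:", finish_times(independent_streams=True))
```

In the TCP case one dropped packet stalls every stream until the retransmission lands; in the QUIC case the HTML and CSS finish on schedule and only the image pays for the loss.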


We'll explore how QUIC differs from TCP in a separate post.
