kination

What is the QUIC protocol, and how to use it in Rust

Following HTTP/1 and HTTP/2, HTTP/3 is now expanding widely in the data transmission domain: roughly 30% of websites worldwide have adopted HTTP/3, and QUIC, the transport protocol underlying HTTP/3, is reported in use on around 8% of all websites. QUIC was first developed by Google and has since been standardized by the IETF.

So, what is QUIC?

"QUIC" (originally "Quick UDP Internet Connection") is a transport-layer network protocol. As the name suggests, it is built on top of UDP.


Zero round-trip time (0-RTT)

Traditionally, data transfer has relied on TCP to establish communication and on TLS on top of it to ensure security. However, TCP requires multiple confirmations, such as the '3-way handshake', before any data can flow, which slows down connection setup.

Unlike this, 0-RTT (Zero Round Trip Time) in QUIC significantly reduces connection establishment time, allowing clients to send data immediately without waiting for a handshake to complete. It works in the following order:

  • Resumption: 0-RTT lets a client resume a previous connection using cached parameters.
  • Immediate data transfer: the client can send application data in the very first round trip of the connection.
  • True zero round trip: unlike TLS over TCP, which still requires the TCP handshake first, QUIC achieves real 0-RTT connection establishment.
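To see why this matters, the setup cost can be sketched with some back-of-the-envelope arithmetic. The 50 ms RTT and the round-trip counts below are illustrative assumptions, not measurements:

```rust
// Connection setup time before the first byte of application data can be sent.
fn setup_time(round_trips: u32, rtt_ms: u32) -> u32 {
    round_trips * rtt_ms
}

fn main() {
    let rtt = 50; // assumed round-trip time in milliseconds

    // TCP + TLS 1.3: one RTT for the TCP handshake, one for the TLS handshake.
    let tcp_tls = setup_time(2, rtt);

    // QUIC, first connection: transport and TLS handshakes share a single RTT.
    let quic_first = setup_time(1, rtt);

    // QUIC 0-RTT resumption: application data rides in the very first flight.
    let quic_resumed = setup_time(0, rtt);

    println!("TCP+TLS1.3: {tcp_tls} ms, QUIC: {quic_first} ms, QUIC 0-RTT: {quic_resumed} ms");
    // → TCP+TLS1.3: 100 ms, QUIC: 50 ms, QUIC 0-RTT: 0 ms
}
```

On a high-latency link the difference grows linearly with the RTT, which is why 0-RTT is most noticeable on mobile networks.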


0-RTT dramatically speeds up resumed connections, which leads to a smoother web experience on frequently visited sites.

Avoid head-of-line blocking

Head-of-line (HoL) blocking is a performance issue that occurs during network communication.

In TCP, data is transmitted as a single stream, and packets must be received and acknowledged in the order they were sent. If any packet goes missing, the packets behind it must wait until the missing one has been retransmitted and received. In other words, all data in the stream stalls because of one lost packet.


To avoid this, QUIC introduces stream multiplexing: multiple independent streams of data are sent over a single connection. Each stream is managed separately, so if a packet goes missing, it only affects its own stream and not the other streams on the same connection.
Each stream has its own identifier, allowing the protocol to keep track of which packets belong to which stream. This lets the receiver process packets from different streams as they arrive, without waiting for missing packets from another stream.
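The idea can be sketched with a toy per-stream reassembly buffer. The `Demux` type, the `(stream_id, offset, data)` packet shape, and the example stream IDs are hypothetical illustrations, not QUIC's actual wire format:

```rust
use std::collections::{BTreeMap, HashMap};

/// Toy per-stream reassembly: packets carry (stream_id, offset, data).
/// A missing packet only stalls its own stream, never the others.
struct Demux {
    // Per stream: next expected byte offset, plus an out-of-order buffer.
    streams: HashMap<u64, (u64, BTreeMap<u64, Vec<u8>>)>,
}

impl Demux {
    fn new() -> Self {
        Demux { streams: HashMap::new() }
    }

    /// Accept a packet; return whatever bytes are now deliverable
    /// in order for that stream (possibly nothing).
    fn receive(&mut self, stream_id: u64, offset: u64, data: Vec<u8>) -> Vec<u8> {
        let (next, buf) = self.streams.entry(stream_id).or_insert((0, BTreeMap::new()));
        buf.insert(offset, data);
        let mut out = Vec::new();
        // Drain contiguous segments starting at the expected offset.
        while let Some(seg) = buf.remove(next) {
            *next += seg.len() as u64;
            out.extend(seg);
        }
        out
    }
}

fn main() {
    let mut demux = Demux::new();
    // Stream 0's first packet (offset 0) is lost; offset 3 arrives early.
    assert!(demux.receive(0, 3, b"def".to_vec()).is_empty());
    // Stream 4 is independent: its data is delivered immediately.
    assert_eq!(demux.receive(4, 0, b"hi".to_vec()), b"hi".to_vec());
    // The retransmission unblocks stream 0 only; both segments flush in order.
    assert_eq!(demux.receive(0, 0, b"abc".to_vec()), b"abcdef".to_vec());
    println!("per-stream delivery ok");
}
```

In a TCP-style single buffer, the early `b"hi"` from stream 4 would have had to wait behind stream 0's lost packet; here it is delivered right away.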

How to use QUIC in Rust

To build QUIC server/client communication in Rust, there are two popular libraries: quiche and quinn.

QUIC libraries for Rust

quiche is a library developed by Cloudflare. Cloudflare does a lot of research on QUIC, and the library tracks the latest QUIC specifications with a strong focus on security and compliance.

quinn is maintained by its open-source community. It is written in pure Rust (quiche internally uses some C code) and has a straightforward, friendly API, making it accessible to developers new to QUIC.

Both follow the standard specifications quite rigorously and are actively maintained, so you can confidently use either one. In this post, however, I'll try s2n-quic, which was released by AWS.

s2n-quic is part of the s2n library suite and aims to provide a secure and efficient QUIC implementation. Some have commented that it might be too AWS-specific, but I haven't found that to be the case: in my experience it was suitable for general use and friendly enough to pick up easily.
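If you want to follow along, a minimal `Cargo.toml` might look like this (the package name is made up, and the version requirements are assumptions; check crates.io for the current releases of `s2n-quic` and `tokio`):

```toml
[package]
name = "quic-echo"
version = "0.1.0"
edition = "2021"

[dependencies]
s2n-quic = "1"
tokio = { version = "1", features = ["full"] }
```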

Sample for s2n-quic

Before starting: for a QUIC connection, a certificate and private key (e.g. cert.pem, key.pem) are generally required to establish a secure connection, because QUIC relies on TLS to provide encryption, authentication, and integrity for data in transit.

Here's some starting point.
Server part:

use s2n_quic::Server;
use std::{error::Error, path::Path};

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Build the server with a TLS certificate/key pair and a local UDP address
    let mut server = Server::builder()
        .with_tls((Path::new("cert.pem"), Path::new("key.pem")))?
        .with_io("127.0.0.1:4433")?
        .start()?;

    // Accept incoming QUIC connections
    while let Some(mut connection) = server.accept().await {
        // Handle each connection in its own task
        tokio::spawn(async move {
            while let Ok(Some(mut stream)) = connection.accept_bidirectional_stream().await {
                // Handle each bidirectional stream in its own task: echo data back
                tokio::spawn(async move {
                    while let Ok(Some(data)) = stream.receive().await {
                        stream.send(data).await.expect("stream should be open");
                    }
                });
            }
        });
    }

    Ok(())
}
  1. Build the server from its builder
  2. Accept each new connection and spawn a task for it
  3. Spawn a task per stream inside the connection to receive data from the client (this sample echoes it back)

and client part:

use s2n_quic::{client::Connect, Client};
use std::{error::Error, path::Path, net::SocketAddr};

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Build the client, trusting the server's certificate
    let client = Client::builder()
        .with_tls(Path::new("cert.pem"))?
        .with_io("0.0.0.0:0")?
        .start()?;

    // Connect to the server; the server name must match the certificate
    let addr: SocketAddr = "127.0.0.1:4433".parse()?;
    let connect = Connect::new(addr).with_server_name("localhost");
    let mut connection = client.connect(connect).await?;

    // Keep the connection from timing out while idle
    connection.keep_alive(true)?;

    // Open a bidirectional stream and split it into receive/send halves
    let stream = connection.open_bidirectional_stream().await?;
    let (mut receive_stream, mut send_stream) = stream.split();

    // Copy server responses to stdout
    tokio::spawn(async move {
        let mut stdout = tokio::io::stdout();
        let _ = tokio::io::copy(&mut receive_stream, &mut stdout).await;
    });

    // Copy stdin to the server
    let mut stdin = tokio::io::stdin();
    tokio::io::copy(&mut stdin, &mut send_stream).await?;

    Ok(())
}
  1. Build the client from its builder and connect to the server
  2. Open a new bidirectional stream and split it into receiving/sending halves
  3. Spawn a task that copies responses from the server to stdout, while stdin is copied to the server

Conclusion

The QUIC protocol represents a major improvement in internet communication. By offering low-latency data transmission while maintaining security and stability, it has significantly enhanced the user experience on web and mobile.
What's more, as a data engineer, I'm excited about the potential positive impact it could have on data streaming systems within data pipelines.
