
Maksim Lanies for Cloudbridge Research


Quic-test: an open tool for testing QUIC, BBRv3, and FEC under real-world network conditions

This article was prepared as part of the CloudBridge Research project focused on optimizing network protocols (BBRv3, MASQUE, FEC, QUIC).

Project: github.com/cloudbridge-research/quic-test

Video demo of quic-test

Screenshot of interface

Why we built this

When we began studying the behavior of QUIC, BBRv3, and Forward Error Correction in real networks — from Wi-Fi to mobile networks and regional backbones — we ran into a simple problem: there were almost no tools capable of accurately reproducing real-world network conditions.

You can use iperf3, but it covers TCP and basic UDP. You can take standalone QUIC libraries, but they lack visualization and load generation. You can write custom simulators, but they do not reflect real channel behavior. Want to test how BBRv3 performs between Moscow and Novosibirsk? Go ahead: find three servers in different datacenters, configure netem, collect the metrics by hand, and hope the results are reproducible.

There was no comprehensive QUIC tester with charts, channel profiles, FEC, BBRv3 support, TUI, and Prometheus metrics.
So we built quic-test — an open tool we use inside CloudBridge Research for all experiments. Now we are sharing it with the community, universities, and engineers.


What is quic-test

A fully open laboratory environment for analyzing the behavior of QUIC, BBRv3, and FEC in real networks.
Not a simulator, but a real engineering instrument — all our research is based on measurements collected with it.

Who quic-test is for

Network engineers — compare TCP/QUIC/BBRv3 in your own networks and understand where QUIC provides real benefits.

SRE and DevOps teams — test service behavior under packet loss and high RTT, prepare for production issues before they appear.

Educators and students — run modern labs on transport protocols with real metrics and visualization.

Researchers — gather datasets for ML routing models, publish reproducible results.

Key capabilities

Protocols: QUIC (RFC 9000), HTTP/3, 0-RTT resumption.

Congestion Control: BBRv2 (stable), BBRv3 (experimental), CUBIC, Reno.

Forward Error Correction: XOR-FEC (working), RS-FEC (in development).

Network profiles: mobile (4G/LTE), Wi-Fi, lossy (3–5% loss), high-latency (regional routes).
All profiles are based on real measurements from our CloudBridge Edge PoPs.

Metrics: Prometheus, Grafana, TUI visualization (quic-bottom in Rust).

Comparison: TCP vs QUIC under identical conditions.


Quick Start

Docker

The simplest way to start is with the prebuilt Docker images:

# Run client (performance test)
docker run mlanies/quic-test:latest --mode=client --server=demo.quic.tech:4433

# Run server
docker run -p 4433:4433/udp mlanies/quic-test:latest --mode=server

Build from source

git clone https://github.com/twogc/quic-test
cd quic-test

# Build FEC library (requires clang)
cd internal/fec && make && cd ../..

# Build main tool (requires Go 1.21+)
go build -o quic-test cmd/quic-test/main.go

# Run
./quic-test --mode=client --server=demo.quic.tech:4433

First test: QUIC vs TCP

Server:

./quic-test --mode=server --listen=:4433

Client:

./quic-test --mode=client --server=127.0.0.1:4433 --duration=30s

QUIC vs TCP comparison:

./quic-test --mode=client --server=127.0.0.1:4433 --compare-tcp --duration=30s

TUI visualization:

quic-bottom --server=127.0.0.1:4433

The results include RTT, jitter, throughput, retransmissions, packet loss, FEC recovery, and fairness under parallel tests.
These are the same metrics we used when achieving jitter <1 ms on PoP↔PoP links.
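Jitter here is a smoothed interarrival variation rather than a raw standard deviation. A minimal sketch of one common estimator — the RFC 3550 formula, where each new deviation moves the estimate by 1/16 of its magnitude (quic-test's exact estimator may differ):

```go
package main

import (
	"fmt"
	"math"
)

// rfc3550Jitter computes the smoothed interarrival jitter estimate
// from RFC 3550: for each new delay sample, the running jitter moves
// by 1/16 of the absolute deviation from the previous sample.
func rfc3550Jitter(delaysMs []float64) float64 {
	j := 0.0
	for i := 1; i < len(delaysMs); i++ {
		d := math.Abs(delaysMs[i] - delaysMs[i-1])
		j += (d - j) / 16.0
	}
	return j
}

func main() {
	// Perfectly stable delays keep the estimate at zero.
	fmt.Println(rfc3550Jitter([]float64{20, 20, 20, 20})) // 0
	// A single 16 ms deviation contributes 16/16 = 1 ms.
	fmt.Println(rfc3550Jitter([]float64{20, 36})) // 1
}
```

With this kind of estimator, a sub-millisecond jitter figure means the path delay barely moves between consecutive packets.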


Real network profiles

Profiles are based on actual measurements from CloudBridge Edge PoPs (Moscow, Frankfurt, Amsterdam).

Mobile (4G/LTE)

RTT 50–150 ms (avg ~80), throughput 5–50 Mbps (avg ~20), 0.1–2% loss, jitter 10–30 ms.
On this profile we tested FEC and achieved ~+10% goodput at 5% loss.

Wi-Fi

Burst losses and micro-drop behavior typical for office/home Wi-Fi.

Lossy

3–5% stable loss — ideal for testing FEC recovery efficiency.

High-latency

RTT 50–150 ms — typical interregional RU↔EU routes that we tested.

Usage:

./quic-test --mode=client --profile=mobile --compare-tcp --duration=60s

Custom profiles

./quic-test --mode=client --profile=custom \
  --rtt=100ms \
  --bandwidth=10mbps \
  --loss=1% \
  --jitter=20ms

Metrics & Grafana integration

Server with Prometheus:

./quic-test --mode=server --prometheus-port=9090

Available metrics

  • quic_rtt_ms
  • quic_jitter_ms
  • quic_loss_total
  • fec_recovered
  • tcp_goodput_mbps
  • quic_goodput_mbps
  • bbrv3_bandwidth_est
  • quic_datagram_rate
  • connection_drops
  • queue_delay_ms

We use these metrics in Grafana and in AI Routing Lab.
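Under the hood, a Prometheus endpoint is just a plain-text page of `name value` lines that Prometheus scrapes. A stdlib-only sketch of that exposition format using the metric names above — the values are invented for illustration, and a real exporter also emits `# HELP` / `# TYPE` lines:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderMetrics emits gauges in the Prometheus text exposition
// format: one "name value" line per metric, sorted for determinism.
func renderMetrics(m map[string]float64) string {
	names := make([]string, 0, len(m))
	for n := range m {
		names = append(names, n)
	}
	sort.Strings(names)
	var b strings.Builder
	for _, n := range names {
		fmt.Fprintf(&b, "%s %g\n", n, m[n])
	}
	return b.String()
}

func main() {
	fmt.Print(renderMetrics(map[string]float64{
		"quic_rtt_ms":    12.5, // example values only
		"quic_jitter_ms": 0.72,
	}))
}
```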


Research examples

Case 1: Mobile profile (5% loss)

| Metric  | Baseline QUIC | QUIC + FEC 10% | Gain   |
|---------|---------------|----------------|--------|
| Goodput | 3.628 Mbps    | 3.991 Mbps     | +10%   |
| Jitter  | 0.72 ms       | 0.72 ms        | stable |
| RTT P50 | 51.25 ms      | 51.25 ms       | stable |

TCP CUBIC shows 4–6× degradation here.
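The XOR-FEC scheme behind these gains is simple: one parity packet per block lets the receiver rebuild any single lost packet without waiting for a retransmit. A minimal sketch, assuming fixed-size packets (quic-test's framing and SIMD implementation will differ):

```go
package main

import "fmt"

// xorParity builds one parity packet over a block of equal-length
// data packets by XORing them byte-by-byte.
func xorParity(packets [][]byte) []byte {
	parity := make([]byte, len(packets[0]))
	for _, p := range packets {
		for i, b := range p {
			parity[i] ^= b
		}
	}
	return parity
}

// recoverLost reconstructs a single lost packet: XORing the parity
// with every surviving packet cancels the known data and leaves
// exactly the missing payload.
func recoverLost(survivors [][]byte, parity []byte) []byte {
	lost := make([]byte, len(parity))
	copy(lost, parity)
	for _, p := range survivors {
		for i, b := range p {
			lost[i] ^= b
		}
	}
	return lost
}

func main() {
	block := [][]byte{[]byte("pkt0"), []byte("pkt1"), []byte("pkt2")}
	parity := xorParity(block)
	// Simulate losing packet 1; packets 0 and 2 arrive.
	got := recoverLost([][]byte{block[0], block[2]}, parity)
	fmt.Println(string(got)) // pkt1
}
```

The trade-off is visible in the table: the redundancy spends ~10% of capacity, but at 5% loss it buys back more goodput than it costs because recovered packets never trigger a loss-based retransmit.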

Case 2: VPN tunnels with 10% loss

| Metric          | TCP     | QUIC    | QUIC + FEC 15% | Gain  |
|-----------------|---------|---------|----------------|-------|
| Throughput      | 25 Mbps | 45 Mbps | 68 Mbps        | +172% |
| Retransmissions | 18,500  | 12,200  | 3,800          | -79%  |
| P99 RTT         | 450 ms  | 320 ms  | 210 ms         | -53%  |

Other results

  • PoP↔PoP (Moscow—Frankfurt—Amsterdam): jitter <1 ms, connection time 9.20 ms
  • BBRv2 vs BBRv3 on satellite-like profiles: +16% throughput
  • Production profile: 9.258 Mbps goodput at 30 connections

More in docs/reports/.


Usage in universities

quic-test was originally designed as a teaching laboratory environment.

Available labs

Lab #1: QUIC basics — RTT, jitter, handshake, 0-RTT, connection migration.
Lab #2: TCP vs QUIC — losses, HOL blocking, performance.
Lab #3: Losses & FEC — redundancy trade-offs.
Lab #4: BBRv3 vs CUBIC — congestion control comparison.
Lab #5: NAT traversal — ICE/STUN/TURN.
Lab #6: HTTP/3 performance — multiplexing vs HOL-blocked HTTP/2.

Materials are available in docs/labs/.


Architecture

Detailed scheme in docs/ARCHITECTURE.md. Summary:

Go core — transport, QUIC, measurements (quic-go v0.40, BBRv2 and experimental BBRv3).
Rust TUI — quic-bottom, real-time visualization.
C++ FEC module — AVX2 SIMD optimized, stable XOR-FEC, experimental RS-FEC.
Metrics — Prometheus, HDR histograms.
Network emulation — token bucket, delay queue, random drop.


Project status

Stable — QUIC client/server, TCP vs QUIC comparison, profiles, Prometheus, TUI.
Experimental — BBRv3, RS-FEC, MASQUE CONNECT-IP, TCP-over-QUIC.
Planned — automatic plotting, eBPF latency inspector, mini-PoP container.


Why we opened the project

CloudBridge Research is an independent research center (ANO "Center for Network Technology Research and Development", founded in 2025).
Our goal is to create an open stack of tools for engineers and universities.
We believe that open research accelerates technological progress and makes it accessible to everyone.


Related projects

AI Routing Lab — uses quic-test metrics to train delay prediction models (>92% accuracy target).
masque-vpn — QUIC/MASQUE VPN load-tested with quic-test, including high-loss scenarios.


How to reproduce our results

All configs and commands are in the repo.

Production profile (0.1% loss, 20 ms RTT):

./quic-test --mode=server --listen=:4433 --prometheus-port=9090

./quic-test --mode=client \
  --server=<server-ip>:4433 \
  --connections=30 \
  --duration=60s \
  --congestion=bbrv3 \
  --profile=custom \
  --rtt=20ms \
  --loss=0.1%

Mobile profile (5% loss) with FEC:

./quic-test --mode=server \
  --listen=:4433 \
  --fec=true \
  --fec-redundancy=0.10

./quic-test --mode=client \
  --server=<server-ip>:4433 \
  --profile=mobile \
  --fec=true \
  --fec-redundancy=0.10 \
  --duration=60s

Other scenarios are described in scripts/ and docs/reports/.


Contributions & feedback

We welcome issues, PRs, test reports, feature proposals, and integrations into university courses.

GitHub: https://github.com/cloudbridge-research/quic-test
Email: info@cloudbridge-research.ru
Blog: https://cloudbridge-research.ru


Conclusion

If you need to honestly evaluate how QUIC, BBRv3, and FEC behave in real networks — from Wi-Fi to mobile to regional backbones — try quic-test.

All results are reproducible, all tools are open, and this is a living engineering project that evolves with the community.

Try it, reproduce our findings, share your results — together we make networks better.
