Two machines. Two continents. GCP us-east1 (South Carolina) and europe-west1 (Belgium). 85ms RTT. 100 runs per test, median reported. No synthetic benchmarks.
Connection setup: 11x faster.
HTTP/2 requires a TCP three-way handshake (~85ms), TLS 1.3 handshake (~85ms), and ALPN negotiation. Total: ~175ms per connection. Pilot does the expensive work (STUN discovery, tunnel creation) once at daemon startup. Each new connection reuses the existing tunnel. Total: ~15ms.
An orchestrator dispatching tasks to 50 agents: 8.75 seconds of pure overhead with HTTP/2. 750 milliseconds with Pilot.
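The dispatch math above is easy to verify. A back-of-envelope check using only the numbers in the article (the per-connection costs are the article's measurements, not independently derived):

```python
# Per-connection setup cost, from the benchmark above
http2_setup_ms = 175   # TCP 3-way handshake + TLS 1.3 + ALPN, cold start
pilot_setup_ms = 15    # tunnel already up at daemon startup; stream reuse only

agents = 50  # orchestrator fan-out
print(f"HTTP/2: {agents * http2_setup_ms / 1000:.2f} s of setup overhead")
print(f"Pilot:  {agents * pilot_setup_ms / 1000:.2f} s")
# HTTP/2: 8.75 s of setup overhead
# Pilot:  0.75 s
```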
Message latency: identical where it matters.
At 1 KB (the size of a JSON task description): HTTP/2 172ms, Pilot 171ms. At 10 KB: 174ms vs 172ms. At 100 KB: 182ms vs 179ms. Both protocols are dominated by the 85ms network RTT. The protocol overhead is not the bottleneck. The network is.
At 1 MB, HTTP/2 edges ahead by 2.4%, the payoff of 30 years of kernel-level TCP optimization. For the message sizes agents actually send, that edge is irrelevant.
Memory at scale: 10x lighter.
100 simultaneous peer connections: HTTP/2 uses 240 MB RSS. Pilot uses 24 MB. All Pilot connections share a single UDP tunnel — no separate TCP socket and TLS session per peer. For agent swarms on resource-constrained VMs, that is the difference between running 100 agents and running 10 on the same memory budget.
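The per-peer cost falls straight out of the RSS figures above (again, only arithmetic on the reported measurements):

```python
peers = 100
http2_rss_mb = 240  # one TCP socket + TLS session per peer
pilot_rss_mb = 24   # lightweight streams over one shared UDP tunnel

print(f"HTTP/2: {http2_rss_mb / peers:.2f} MB per peer")
print(f"Pilot:  {pilot_rss_mb / peers:.2f} MB per peer")
print(f"Ratio:  {http2_rss_mb // pilot_rss_mb}x")
# HTTP/2: 2.40 MB per peer
# Pilot:  0.24 MB per peer
# Ratio:  10x
```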
Behind NAT: where it actually matters.
Put one agent behind Cloud NAT. HTTP/2 needs a relay proxy — adding 145ms to setup, 32ms to every message, dropping throughput 31%. Pilot hole-punches through the NAT and establishes a direct tunnel. After the punch: +7ms setup, +2ms per message, 4% throughput loss.
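Stacking the NAT penalties on top of the baseline numbers from earlier sections makes the gap concrete. A sketch assuming the relay and hole-punch overheads add linearly to the cold-start setup and 1 KB message medians reported above (a simplification; real paths may compose differently):

```python
# Baselines from the earlier sections (cold setup, 1 KB round trip)
http2_setup_ms, http2_msg_ms = 175, 172
pilot_setup_ms, pilot_msg_ms = 15, 171

# HTTP/2 behind Cloud NAT: relay proxy in the path
relay_setup = http2_setup_ms + 145
relay_msg = http2_msg_ms + 32

# Pilot behind Cloud NAT: direct tunnel after hole punch
punch_setup = pilot_setup_ms + 7
punch_msg = pilot_msg_ms + 2

print(f"HTTP/2 via relay: setup {relay_setup} ms, per-message {relay_msg} ms")
print(f"Pilot via punch:  setup {punch_setup} ms, per-message {punch_msg} ms")
# HTTP/2 via relay: setup 320 ms, per-message 204 ms
# Pilot via punch:  setup 22 ms, per-message 173 ms
```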
The choice is not "HTTP or Pilot." It is "HTTP where you can, Pilot where you must." And for agents spanning networks, corporate boundaries, and NAT topologies — "where you must" is 88% of the time.
Read more: Benchmarking Agent Communication: HTTP vs. UDP Overlay · Connect AI Agents Behind NAT Without a VPN