DEV Community

Benny

Posted on
Drogon: The C++ Framework That Tops HTTP/2 Benchmarks (And Where It Struggles)

I've been digging through HttpArena benchmark data lately — it's an open benchmark suite that tests HTTP frameworks across a bunch of realistic scenarios — and Drogon caught my eye. It's quietly one of the most interesting performers in the entire dataset.

Let me walk you through what I found.

What is Drogon?

Drogon is a C++ web framework built on top of its own async networking library (Trantor). It's been around since 2018, and it's designed for high-performance HTTP services. Think of it as what you'd reach for if you need raw C++ speed but don't want to hand-roll everything from scratch.

The HttpArena implementation uses Drogon v1.9.10, compiled with -O3 -flto (link-time optimization), running on Ubuntu 24.04. C++17, CMake build, nothing exotic.

The Numbers

Baseline (Plain Text Response)

In the standard baseline test, Drogon lands #7 out of 30 frameworks across all connection levels:

| Connections | RPS | Avg Latency | P99 Latency | Memory |
|---:|---:|---:|---:|---:|
| 512 | 1,928,561 | 264 μs | 1.61 ms | 81.7 MiB |
| 4,096 | 2,249,513 | 1.82 ms | 9.67 ms | 129.4 MiB |
| 16,384 | 2,087,751 | 7.57 ms | 42.80 ms | 314.6 MiB |

For context, the top 10 at 4,096 connections looks like this:

  1. ringzero (C) — 3,452,370
  2. h2o (C) — 3,162,875
  3. blitz (Zig) — 3,071,375
  4. nginx (C) — 3,028,812
  5. hyper (Rust) — 2,942,685
  6. actix (Rust) — 2,711,945
  7. drogon (C++) — 2,249,513
  8. kemal (Crystal) — 2,154,014
  9. quarkus-jvm (Java) — 2,102,344
  10. bun (TS) — 1,956,298

Solid company. Drogon is the only C++ framework in the benchmark and it's hanging with the Rust and C heavyweights.

HTTP/2 — Where Drogon Shines ✨

Here's where things get really interesting. In the HTTP/2 baseline tests:

| Connections | RPS | Rank | Memory |
|---:|---:|---:|---:|
| 64 | 10,631,440 | #1/14 🏆 | 98.6 MiB |
| 256 | 6,725,340 | #3/16 | 155.7 MiB |
| 1,024 | 6,859,540 | #3/16 | 357.0 MiB |

Drogon takes first place in the HTTP/2 baseline with 64 connections, pushing over 10.6 million requests per second. At that concurrency level, it beats hyper (6.88M), h2o (which dominates at higher concurrency), and everything else. It's also serving at 1.48 GB/s bandwidth while using under 100 MiB of memory.

Even at higher concurrency where h2o takes the lead (14M+ RPS), Drogon stays comfortably in the top 3.

Static File Serving over HTTP/2

Drogon's HTTP/2 dominance extends to static files too:

| Connections | RPS | Rank | Bandwidth |
|---:|---:|---:|---:|
| 64 | 1,813,238 | #1/14 🏆 | 27.77 GB/s |
| 256 | 1,546,328 | #2/16 | 23.66 GB/s |
| 1,024 | 1,018,221 | #5/16 | 15.57 GB/s |

Another first-place finish at 64 connections, beating actix by a significant margin (1.81M vs 1.35M). The bandwidth numbers are massive — nearly 28 GB/s of static content over HTTP/2.

Pipelined Requests

Pipelining shows Drogon's solid but not spectacular HTTP/1.1 parsing:

| Connections | RPS | Rank |
|---:|---:|---:|
| 512 | 7,828,214 | #9/30 |
| 4,096 | 7,612,822 | #9/30 |
| 16,384 | 7,260,243 | #9/30 |

Consistently 9th place across all concurrency levels. The gap to the top is real though — ringzero hits 47M RPS pipelined, roughly 6x what Drogon manages. But 7.8M pipelined RPS is nothing to sneeze at for a full-featured framework.

JSON Serialization — The Plot Twist

Okay, this is where the story gets complicated. In the JSON test (serialize a 50-item dataset):

| Connections | RPS | Rank |
|---:|---:|---:|
| 4,096 | 128,946 | #26/29 😬 |
| 16,384 | 124,793 | #26/29 |

That's... not great. Drogon drops to near the bottom of the pack for JSON serialization. For context, nginx (using its native JSON module) hits 1.18M RPS in the same test. Even Flask manages 107K — Drogon is barely ahead of it.

But here's the twist. At 32,768 connections, Drogon jumps to #3 out of 14 with 933,156 RPS. The framework seems to have a specific performance cliff at moderate connection counts for JSON workloads, then recovers dramatically at very high concurrency.

Looking at the implementation, the likely culprit is jsoncpp. Drogon uses jsoncpp for JSON serialization, which is known to be one of the slower JSON libraries in C++. The code builds each JSON response by constructing Json::Value objects field by field, then serializes with Json::StreamWriterBuilder. At lower concurrency where the CPU isn't fully utilized across all event loop threads, the per-request serialization overhead dominates.

Compression

This is Drogon's worst showing:

| Connections | RPS | Rank |
|---:|---:|---:|
| 4,096 | 4,348 | #24/28 |
| 16,384 | 4,173 | #23/28 |

Only ~4K RPS with gzip compression enabled. Memory usage spikes to 556 MiB and CPU pegs at 12,153% (over 120 cores' worth of CPU time). Drogon uses zlib for compression, and compressing the large JSON response on every request absolutely tanks throughput. The top performer here (blitz) manages 89K RPS, over 20x more.

Mixed Workload

The mixed test hits multiple endpoints in a realistic traffic pattern:

| Connections | RPS | Rank |
|---:|---:|---:|
| 512 | 21,593 | #16/17 |
| 4,096 | 22,858 | #20/27 |
| 16,384 | 22,100 | #20/27 |

Bottom half of the field. The mixed workload combines plain text, JSON, compression, static files, and database queries — and the JSON/compression weakness drags the composite score down significantly. go-fasthttp leads here with 87K RPS at 4,096 connections.

Limited Connections & Noisy Neighbor

Drogon recovers nicely in constrained scenarios:

  • Limited connections (512): #6/30 with 1,251,259 RPS
  • Limited connections (4,096): #4/30 with 1,646,234 RPS
  • Noisy neighbor (4,096): #6/30 with 1,965,305 RPS

When the playing field is leveled by connection limits or background noise, Drogon's efficient event loop and low per-connection overhead keep it competitive.

Architecture Deep Dive

Looking at the HttpArena implementation, a few things stand out:

Thread-Local SQLite

```cpp
static thread_local sqlite3 *tl_db = nullptr;
static thread_local sqlite3_stmt *tl_stmt = nullptr;
```

Each event loop thread gets its own SQLite connection with a pre-prepared statement. No mutex contention, no connection pooling overhead. The PRAGMA mmap_size=268435456 enables memory-mapped I/O for the database file. Clean approach.

Pre-loaded Everything

Datasets and static files are loaded entirely into memory at startup:

```cpp
static std::vector<DataItem> dataset;
static std::string json_large_response;
static std::unordered_map<std::string, StaticFile> static_files;
```

The large JSON response is pre-serialized once and served as a raw string. Static files sit in an unordered_map for O(1) lookups. This is why the static file serving numbers are so good — there's zero disk I/O.

Async Callback Pattern

Drogon uses the classic async callback style:

```cpp
void pipeline(const HttpRequestPtr &req,
              std::function<void(const HttpResponsePtr &)> &&callback)
{
    auto resp = HttpResponse::newHttpResponse();
    resp->setBody("ok");
    resp->setContentTypeCode(CT_TEXT_PLAIN);
    callback(resp);
}
```

No coroutines, no futures — just raw callbacks. This keeps the overhead minimal but makes complex async chains harder to write. Drogon does support coroutines in newer versions, but this benchmark sticks with callbacks.

Build Optimization

The Dockerfile shows Drogon built from source with LTO enabled, and the app compiled with -O3 -flto. Notably, drogon itself is built with -DBUILD_ORM=OFF -DBUILD_BROTLI=OFF — stripping out unused features for a leaner binary.
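As a hedged sketch, the configure step probably looks something like this (the `-DBUILD_ORM=OFF -DBUILD_BROTLI=OFF` flags come from the post; source/build directory names and the parallelism setting are illustrative, not taken from the benchmark repo):

```shell
# Configure Drogon with unused subsystems disabled and LTO enabled.
cmake -S drogon -B build \
      -DCMAKE_BUILD_TYPE=Release \
      -DBUILD_ORM=OFF \
      -DBUILD_BROTLI=OFF \
      -DCMAKE_CXX_FLAGS="-O3 -flto"
# Build with one job per available core.
cmake --build build -j"$(nproc)"
```

Disabling the ORM and Brotli support trims both compile time and binary size; since the benchmark uses raw SQLite and zlib, neither feature is missed.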

Who Should Use Drogon?

Good fit if you:

  • Already have a C++ codebase and need HTTP endpoints
  • Need excellent HTTP/2 performance (seriously, those numbers are elite)
  • Want a mature, feature-complete framework (ORM, WebSocket, middleware, etc.)
  • Need low memory usage under moderate load (~80-130 MiB)
  • Serve mostly static or pre-computed content

Maybe look elsewhere if you:

  • Need fast JSON serialization (consider Rust/actix or use a faster JSON lib)
  • Need strong gzip compression throughput
  • Prefer modern async patterns over callbacks
  • Want a large ecosystem and community (Drogon's is growing but still niche)

The Verdict

Drogon is a framework of extremes. Its HTTP/2 performance is genuinely best-in-class — taking #1 in both baseline and static file tests at moderate concurrency. The plain HTTP/1.1 baseline numbers are consistently top-10 across all concurrency levels. Memory efficiency is excellent.

But the JSON serialization bottleneck is real and dramatic. Dropping from top-7 in the baseline to #26 in the JSON test is a stark reminder that framework performance isn't one-dimensional. The jsoncpp dependency is the obvious weak link; swapping it for simdjson, RapidJSON, or even nlohmann/json could dramatically change those numbers.

If you're building an HTTP/2 service that mostly serves pre-computed or static content, Drogon might be the fastest option available. If you're building a JSON API that serializes data on every request... you might want to benchmark carefully first.


All benchmark data from HttpArena (GitHub). Tests run under controlled conditions with consistent hardware across all frameworks.
