DEV Community

Benny

Actix-web: #1 in 15 Out of 22 Tests — Dissecting the Benchmark King (HttpArena Deep Dive)

There's a framework that keeps showing up at the top of benchmark charts, and it's not written in C.

Actix-web, Rust's battle-tested async web framework, just put up numbers in HttpArena that are genuinely hard to argue with. We're talking #1 overall in 15 out of the 22 test profiles it competed in, across 47 frameworks. Not #1 among Rust frameworks. #1 overall.

Let's dig into what's going on.

What is Actix-web?

Actix-web is a Rust web framework built on top of the Tokio async runtime. It's been around since ~2017, making it one of the more mature options in the Rust ecosystem. Version 4 (the one tested here) dropped the actor model dependency that gave it its name — now it's just a really fast, really ergonomic async web framework.

It uses rustls for TLS (no OpenSSL dependency), compiles with thin LTO at opt-level 3, and targets native CPU instructions via target-cpu=native. The HttpArena implementation runs one worker per CPU core with a listen backlog of 4096.
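That wiring looks roughly like the following sketch, assuming actix-web 4 (the handler and route are illustrative, not copied from the HttpArena repository):

```rust
use actix_web::{get, App, HttpResponse, HttpServer, Responder};

#[get("/")]
async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // One worker per CPU core, as in the HttpArena implementation
    let workers = std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);

    HttpServer::new(|| App::new().service(index))
        .workers(workers)  // worker threads, each with its own event loop
        .backlog(4096)     // pending-connection queue depth
        .bind(("0.0.0.0", 8080))?
        .run()
        .await
}
```

Each worker runs its own single-threaded event loop, which is why per-worker state (like the database connections discussed later) avoids cross-thread synchronization entirely.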

The Headline Numbers

Let's start with where actix absolutely dominates.

Baseline (Plain HTTP/1.1)

At 4,096 connections, actix hits 2.61M requests/sec with 1.57ms average latency and only 158MB of memory. For context, that puts it:

  • #6 overall out of 47 frameworks (behind ringzero, h2o, nginx, blitz, and hyper)
  • #2 among Rust frameworks (hyper edges it out at 2.76M rps)
  • Ahead of bun (1.56M), drogon (1.69M), and every Go framework

But here's the thing — at 512 connections, actix climbs to 2.49M rps with a tiny 205μs average latency and 93MB RAM. The consistency across connection counts is impressive.

Pipelined Requests — Where Actix Gets Scary

This is where things get wild. With HTTP pipelining (16 requests per connection), actix hits:

| Connections | Requests/sec | Avg latency | Memory |
| ----------- | ------------ | ----------- | ------ |
| 512         | 20.4M        | 400μs       | 123MB  |
| 4,096       | 23.0M        | 2.84ms      | 220MB  |
| 16,384      | 21.4M        | 10.7ms      | 689MB  |

That's 23 million requests per second at peak. #3 overall, behind only ringzero (46.8M, written in C) and blitz (39.5M, written in Zig). Actix beats hyper (16.3M), go-fasthttp (17.8M), and the entire JVM ecosystem in this test.

For a framework that gives you routing, middleware, and a full request/response abstraction — doing 23M rps in pipelining is absurd.

JSON Serialization — The Practical Test

The JSON test serializes a dataset, computes derived fields, and sends it back. This is closer to what a real API does.

At 4,096 connections: 1.13M rps, pushing 8.92 GB/s of bandwidth. That's #3 overall, right behind hyper (1.17M) and nginx (1.14M). Actix is neck-and-neck with hyper here despite layering a full framework on top of its own, separate HTTP implementation.

Interesting detail: actix uses serde_json (Rust's standard JSON library) — no exotic SIMD JSON tricks. And it still hangs with nginx, which uses a highly optimized C JSON implementation. Rust's zero-cost abstractions are doing real work here.
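A handler in this style would look something like the sketch below, assuming serde's derive feature alongside actix-web 4 (the `Item` type and its fields are illustrative, not the real HttpArena dataset):

```rust
use actix_web::HttpResponse;
use serde::Serialize;

// Hypothetical response type standing in for the benchmark's dataset.
#[derive(Serialize)]
struct Item {
    id: u32,
    score: f64, // a derived field computed per request
}

async fn json_handler() -> HttpResponse {
    let items: Vec<Item> = (0..10)
        .map(|id| Item { id, score: f64::from(id) * 1.5 })
        .collect();
    // HttpResponse::json serializes with serde_json under the hood
    HttpResponse::Ok().json(items)
}
```

The `#[derive(Serialize)]` path is exactly the "zero-cost abstraction" at work: serde generates monomorphized serialization code at compile time, so there is no reflection or runtime schema lookup per request.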

Mixed Workload — The Real World Simulation

The mixed test combines baseline requests, JSON serialization, database queries, file uploads, and compression — all hitting the server simultaneously. This is the closest thing to a production workload in HttpArena.

At 4,096 connections:

  • #2 overall: 75,948 rps (52ms avg latency, 2.1GB RAM)
  • Behind only go-fasthttp at 87,964 rps (but fasthttp uses 10.2GB RAM — 5x more memory)
  • Ahead of salvo (73.5K), bun (70.7K), and ultimate-express (63K)

At 16,384 connections, actix takes #1: 157,549 rps. Go-fasthttp can't keep up at this connection count.

The memory efficiency here is the real story. Actix handles a brutal mixed workload with 2.1GB while go-fasthttp needs 10.2GB and bun needs 5.4GB.

HTTP/2 Baseline

Actix uses rustls for HTTP/2. At 256 connections: 3.05M rps, ranking #8 out of 21 HTTP/2-capable frameworks. h2o (C) leads at 14.1M, and hyper takes #2 at 8.15M.

This is one of actix's weaker areas relatively speaking — the rustls + actix HTTP/2 implementation doesn't match h2o's purpose-built HTTP/2 stack. But 3M rps for HTTP/2 is still excellent in absolute terms.

Noisy Neighbor — Handling Bad Traffic

The noisy test throws malformed requests, connection resets, and garbage traffic at the server alongside legitimate requests. It's a resilience test.

Actix handles it beautifully: 2.43M rps at 4,096 connections (#5 overall), correctly returning 4xx for bad requests while maintaining throughput. Only the C trio (ringzero, h2o, nginx) and hyper beat it.

Zero 5xx errors. Zero crashes. That's Rust's memory safety paying dividends — no segfaults from malformed input, no buffer overflows from garbage data.

Limited Connections — Efficiency Under Constraints

With connection reuse disabled (every request opens a new TCP connection), actix hits 1.07M rps at 512 connections, ranking #8 overall. The connection setup overhead is real, but actix handles it gracefully with only 128MB of memory.

Where Actix Struggles

No framework is perfect, and actix has clear weak spots.

Compression — Room to Grow

At 4,096 connections: 14,220 rps, #8 overall. Not bad, but blitz (89K rps) is 6x faster, and even deno (17.7K) and bun (15.8K) outpace it.

The culprit is likely the compression middleware implementation. Actix uses flate2 through its compress-gzip feature — solid but not cutting-edge. The 5.7GB memory usage at 4K connections also suggests the compression pipeline could be more efficient.

Uploads — The Weak Spot

File uploads reveal actix's biggest weakness. At 256 connections: 616 rps, #15 overall. At 512 connections: 559 rps, #16. Spring JVM leads at 1,265 rps — more than double.

The upload handler in the HttpArena implementation is simple (web::Bytes → count length → respond), so this isn't a code issue. Actix's body parsing pipeline likely has overhead for large payloads that frameworks like Spring and nginx handle more efficiently.
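For reference, a handler with the shape described (raw body in, length out) is only a few lines in actix-web 4; this is a sketch, not the repository's exact code:

```rust
use actix_web::{web, HttpResponse};

// Take the fully-buffered request body as Bytes and report its length.
async fn upload(body: web::Bytes) -> HttpResponse {
    HttpResponse::Ok().body(body.len().to_string())
}
```

With the handler this thin, any upload-test gap sits below it, in how the framework buffers and hands over large request bodies.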

H2 Static Files at Scale

At 1,024 HTTP/2 connections serving static files: 946K rps, #6. Nginx (1.80M) and hyper (1.66M) are significantly faster. At lower connection counts actix does better (#2 at 64 connections with 1.35M rps), but it doesn't scale as well under HTTP/2 pressure.

The Rust Showdown

How does actix stack up against its Rust siblings?

| Test           | hyper | actix | salvo | rocket |
| -------------- | ----- | ----- | ----- | ------ |
| Baseline 4K    | 2.76M | 2.61M | 1.26M | 86K    |
| Pipelined 4K   | 16.3M | 23.0M | 3.3M  | 176K   |
| JSON 4K        | 1.17M | 1.13M | 781K  | 44K    |
| Mixed 4K       | n/a   | 75.9K | 73.5K | 34.7K  |
| Compression 4K | n/a   | 14.2K | 15.3K | 10.1K  |

The pattern is clear:

  • hyper wins raw throughput (it's the HTTP library actix is compared against, not built on — actix has its own HTTP implementation)
  • actix wins pipelining and mixed workloads by a huge margin
  • salvo is competitive in practical tests and wins compression
  • rocket is... in a different league (trading performance for developer ergonomics)

Actix vs hyper is the most interesting comparison. Hyper is a lower-level HTTP library — less abstraction, less overhead. The fact that actix, with its routing, middleware stack, and request extraction pipeline, comes within 5-10% of hyper in most tests is remarkable. And in pipelining, actix actually crushes hyper by 41%.

Reading the Implementation

Looking at the actual HttpArena implementation, a few things stand out:

Smart caching: The JSON large dataset is pre-serialized at startup (build_json_cache) and served as raw bytes for the compression test. No re-serialization per request.
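The idea generalizes beyond actix: serialize once at startup, then hand out the cached bytes on every request. A dependency-free sketch using `std::sync::OnceLock` (the cache-building logic here is illustrative, not HttpArena's):

```rust
use std::sync::OnceLock;

// Serialized payload built exactly once; later calls reuse the same buffer.
static JSON_CACHE: OnceLock<String> = OnceLock::new();

fn build_json_cache() -> &'static str {
    JSON_CACHE.get_or_init(|| {
        // Stand-in for the real dataset serialization (HttpArena uses
        // serde_json); here the payload is formatted by hand.
        let items: Vec<String> = (0..3)
            .map(|i| format!("{{\"id\":{i}}}"))
            .collect();
        format!("[{}]", items.join(","))
    })
}

fn main() {
    let first = build_json_cache();
    let second = build_json_cache();
    // Same allocation both times: no per-request re-serialization.
    assert!(std::ptr::eq(first, second));
    println!("{first}");
}
```

Serving the cached slice turns the hot path into a memory copy, which is exactly what you want under a compression or large-JSON benchmark.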

Per-worker database connections: Each actix worker gets its own SQLite connection with PRAGMA mmap_size=268435456 (256MB memory-mapped I/O). No connection pooling overhead, no cross-thread synchronization on DB access.
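A sketch of that per-worker setup, assuming the rusqlite crate (the function name and database path are illustrative):

```rust
use rusqlite::Connection;

// Each worker opens its own connection at startup: no pool, no locks
// shared across threads.
fn open_worker_db(path: &str) -> rusqlite::Result<Connection> {
    let conn = Connection::open(path)?;
    // 256MB of memory-mapped I/O, matching the HttpArena setting
    conn.execute_batch("PRAGMA mmap_size=268435456;")?;
    Ok(conn)
}
```

Because each actix worker is single-threaded, the connection never needs a mutex; the PRAGMA just tells SQLite to serve reads through mmap instead of read() syscalls.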

Static header values: The SERVER header is a static HeaderValue — allocated once, cloned cheaply. Small thing, but at 23M rps, small things matter.

Compile-time optimization: codegen-units = 1 + thin LTO + target-cpu=native + panic=abort. This squeezes every last drop out of the compiler. The Dockerfile even uses RUSTFLAGS="-C target-cpu=native" for native SIMD instructions.
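Those settings map onto a release profile like the following (a sketch of the equivalent Cargo.toml, not copied from the repository):

```toml
[profile.release]
opt-level = 3      # maximum optimization
codegen-units = 1  # single codegen unit for better cross-function inlining
lto = "thin"       # thin link-time optimization
panic = "abort"    # drop unwinding machinery entirely
```

combined with `RUSTFLAGS="-C target-cpu=native"` at build time so the compiler can emit the host CPU's SIMD instructions.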

Middleware approach: Compression uses actix_web::middleware::Compress::default() — it's applied globally, so the compression endpoint benefits from the framework's built-in gzip handling rather than manual compression.

Who Should Use Actix-web?

If you're building a Rust web service and care about performance, actix-web is the obvious choice. The numbers speak for themselves, but more importantly:

  • It's mature: Version 4 has been stable for years. The ecosystem (middleware, extractors, websockets) is deep.
  • It's ergonomic: Compared to hyper (which requires you to handle everything manually), actix gives you routing, middleware, typed extractors, and a clean API.
  • Memory efficiency: Consistently low memory usage across all tests. When go-fasthttp needs 10GB for a mixed workload, actix does it in 2GB.
  • Battle-tested: Powers production systems at scale. Microsoft, for example, uses actix-web internally.

The main trade-off is Rust itself — compile times, borrow checker learning curve, and a smaller hiring pool. But if you've already committed to Rust, actix-web should be your default choice for web APIs.

The Bottom Line

Actix-web is the most complete high-performance web framework in the HttpArena benchmark. It doesn't always take #1 (the C frameworks and hyper beat it in raw throughput), but no other framework combines:

  • Top-6 baseline performance
  • Top-3 pipelined throughput (23M rps!)
  • Top-3 JSON serialization
  • Top-2 mixed workload handling
  • Excellent memory efficiency
  • Full framework features (routing, middleware, extractors)

The only frameworks that consistently outperform it are either bare HTTP libraries (hyper, h2o) or purpose-built C/Zig systems (ringzero, blitz, nginx) that sacrifice developer ergonomics for raw speed.

For a framework that gives you #[get("/api/items")] syntax and middleware stacks, doing 23 million pipelined requests per second is not normal. Actix makes it look easy.


All benchmarks from HttpArena — an open-source HTTP framework benchmark suite. Full results, methodology, and source code on GitHub.

Got questions about the data or want to see another framework deep dive? Drop a comment!

Top comments (1)

Nikolay Kim

Actix Web hasn’t been the benchmark leader for a while, you should try ntex + neon-uring