DEV Community

Benny

Fiber: Built on fasthttp, But 28x Slower at Pipelining — What Happened? (HttpArena Deep Dive)

Fiber is one of the most popular Go web frameworks on GitHub. 34K+ stars. Express-inspired API. And it's built on top of fasthttp — the same engine that crushes most benchmarks.

So you'd expect Fiber to be fast, right? Maybe not quite as fast as raw fasthttp, but close?

I dug into HttpArena's benchmark data to find out. The results surprised me.

The Quick Summary

Fiber is the most memory-efficient Go framework in almost every test. It's also last place among Go frameworks in most throughput tests. And in pipelining, it's not just slower than fasthttp — it's 28x slower.

But the story is more nuanced than "Fiber is slow." Let's dig in.

Baseline Performance: Last Among Go Peers

In the standard baseline test at 4,096 connections:

| Framework | Requests/sec | Memory | Avg Latency |
| --- | --- | --- | --- |
| go-fasthttp | 1,464,168 | 188 MB | 2.79ms |
| gin | 430,086 | 375 MB | 9.45ms |
| echo | 424,337 | 249 MB | 9.54ms |
| chi | 422,523 | 359 MB | 9.62ms |
| fiber | 397,172 | 144 MB | 6.47ms |

Fiber comes in 5th out of 5 Go frameworks for raw throughput, and #39 out of 51 frameworks overall. That's... not what you'd expect from something built on fasthttp.

But look at that memory column. 144 MB. That's the lowest of any Go framework by a wide margin — 42% less than echo, and 62% less than gin. And the latency is actually better than gin/echo/chi despite lower throughput.

The Pipelining Gap: 28x

This is where things get wild. HTTP pipelining at 4,096 connections with 16 requests per pipeline:

| Framework | Requests/sec | Memory |
| --- | --- | --- |
| go-fasthttp | 17,808,031 | 196 MB |
| gin | 1,046,933 | 1,003 MB |
| echo | 1,016,858 | 492 MB |
| chi | 937,099 | 692 MB |
| fiber | 623,248 | 96 MB |

go-fasthttp does 17.8 million requests per second. Fiber does 623K. That's a 28.6x gap.

Even gin manages 1M rps in pipelining — 68% more than Fiber. And again, look at gin's memory usage: over 1 GB. Fiber? 96 MB. Sipping resources.

Why Such a Massive Gap?

I read both implementations. The difference is architectural.

go-fasthttp in HttpArena uses SO_REUSEPORT — it spawns one listener per CPU core, each with its own fasthttp.Server. Incoming connections get distributed by the kernel. The routing is a raw switch statement on ctx.Path(). Zero middleware, zero overhead, zero allocations on the hot path.

```go
// go-fasthttp: one SO_REUSEPORT listener per CPU core; the kernel
// distributes incoming connections across them.
for i := 0; i < runtime.NumCPU(); i++ {
	go func() {
		ln, err := reuseport.Listen("tcp4", ":8080")
		if err != nil {
			log.Fatal(err)
		}
		s := &fasthttp.Server{Handler: handler}
		log.Fatal(s.Serve(ln))
	}()
}
```

Fiber runs a single app.Listen(":8080") with its Express-style router, middleware chain, and compress.New() applied globally. Every request walks through middleware functions. The router does pattern matching instead of a switch statement.

```go
// fiber: single listener with a global middleware chain
app := fiber.New(fiber.Config{...})
app.Use(compress.New(compress.Config{Level: compress.LevelBestSpeed}))
app.Get("/pipeline", handler)
app.Listen(":8080")
```

That compression middleware is applied globally — even on the /pipeline endpoint that returns a 2-byte "ok" response. Every baseline request pays the cost of checking Accept-Encoding headers for no reason.
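
As a sketch (assuming the Fiber v2 API, where `app.Use` accepts a path prefix), the middleware could instead be mounted only where it pays off, leaving the hot `/pipeline` path middleware-free. The handlers here are placeholders; `jsonLargeResponse` is the pre-computed payload the benchmark already has.

```go
// Sketch only: mount compress under /compression instead of globally.
app := fiber.New()

// Compression applies only to requests whose path starts with /compression.
app.Use("/compression", compress.New(compress.Config{
	Level: compress.LevelBestSpeed,
}))

// /pipeline now skips the Accept-Encoding check entirely.
app.Get("/pipeline", func(c *fiber.Ctx) error {
	return c.SendString("ok")
})
app.Get("/compression", func(c *fiber.Ctx) error {
	c.Set("Content-Type", "application/json")
	return c.Send(jsonLargeResponse)
})
```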

This is the cost of ergonomics. Fiber gives you Express-style middleware, clean routing, and a nice API. That costs CPU cycles.

Where Fiber Actually Wins

Here's the twist: there are two categories where Fiber outperforms its Go peers.

Limited Connections (512)

When connections are scarce and there's keep-alive and reconnection churn:

| Framework | Requests/sec | Memory |
| --- | --- | --- |
| fiber | 178,746 | 68 MB |
| gin | 149,330 | 94 MB |
| go-fasthttp | 147,847 | 100 MB |
| chi | 144,893 | 94 MB |
| echo | 136,646 | 93 MB |

Fiber is #1 among Go frameworks here, beating even raw fasthttp by 21%. Under connection churn at lower concurrency, Fiber's lightweight connection handling shines. Fasthttp's multi-listener approach actually hurts here — distributing 512 connections across many listeners means some sit idle while others are busy.

Mixed Workload

The mixed workload test hits all endpoints (baseline, JSON, DB, uploads, compression, static files) simultaneously at 4,096 connections:

| Framework | Requests/sec | Memory |
| --- | --- | --- |
| go-fasthttp | 71,173 | 79.9 GB |
| fiber | 58,490 | 761 MB |
| echo | 36,125 | 1.7 GB |
| chi | 34,365 | 595 MB |
| gin | 32,477 | 988 MB |

Fiber is solidly #2, beating echo/chi/gin by 60-80%. And look at that memory story: go-fasthttp uses 79.9 GB of RAM to achieve 71K rps. Fiber uses 761 MB for 58.5K rps. That's 105x less memory for only 18% less throughput.

Per-megabyte efficiency, Fiber is the clear winner in mixed workloads.

Compression: The Hidden Strength

Fiber's global compression middleware — the same thing that hurts pipelining — actually pays off here:

| Framework | Requests/sec | Memory |
| --- | --- | --- |
| go-fasthttp | 14,771 | 14.4 GB |
| fiber | 9,483 | 5.9 GB |
| chi | 7,602 | 3.4 GB |
| gin | 7,578 | 2.9 GB |
| echo | 7,536 | 3.1 GB |

Second place among Go frameworks. Fiber uses andybalholm/brotli and klauspost/compress through its middleware — solid libraries. The 25% lead over gin/echo/chi is real.

JSON Serialization: The Weak Spot

JSON processing at 4,096 connections:

| Framework | Requests/sec | Memory |
| --- | --- | --- |
| go-fasthttp | 314,945 | 696 MB |
| gin | 174,851 | 433 MB |
| echo | 164,227 | 371 MB |
| chi | 158,040 | 390 MB |
| fiber | 125,297 | 171 MB |

Last place again. 28% slower than gin. The implementation is interesting — Fiber's handler allocates a new []ProcessedItem slice on every request, processes the dataset, then marshals to JSON with json.Marshal. The other net/http-based frameworks do essentially the same thing, but they have more CPU available since they're not running through Fiber's middleware stack.
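
The obvious fix, for any endpoint whose payload is static: marshal once at startup and serve the cached bytes. A stdlib-only sketch (the `ProcessedItem` fields below are illustrative, not the benchmark's actual schema):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// ProcessedItem stands in for the per-request slice the benchmark
// allocates; the fields are illustrative.
type ProcessedItem struct {
	ID    int    `json:"id"`
	Name  string `json:"name"`
	Score int    `json:"score"`
}

// precomputed holds the response bytes, marshaled exactly once.
var precomputed []byte

func init() {
	items := []ProcessedItem{{1, "a", 10}, {2, "b", 20}}
	var err error
	precomputed, err = json.Marshal(items)
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// A handler would just write the cached bytes: zero marshaling,
	// zero slice allocation per request.
	fmt.Println(string(precomputed))
}
```

This is exactly what the implementation already does for the compression endpoint (`jsonLargeResponse`); the JSON endpoint simply doesn't get the same treatment.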

Database Performance: Quietly Strong

Async database queries via PostgreSQL at 1,024 connections:

| Framework | Requests/sec | Memory |
| --- | --- | --- |
| go-fasthttp | 30,784 | 359 MB |
| fiber | 19,196 | 192 MB |
| gin | 17,660 | 220 MB |
| echo | 17,486 | 284 MB |
| chi | 17,324 | 211 MB |

Clear #2 among Go frameworks, and 9-11% ahead of the gin/echo/chi cluster. Both Fiber and go-fasthttp use pgxpool with NumCPU * 4 max connections. The gap between them is mostly the overhead of Fiber's framework layer, but when the bottleneck shifts to the database, that overhead matters less.
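
For reference, that shared pool sizing looks roughly like this (a sketch assuming pgx v5; the connection string is a placeholder, not the benchmark's):

```go
// Sketch: a pgxpool sized to NumCPU * 4 max connections, as both the
// fiber and go-fasthttp implementations use. DSN is a placeholder.
cfg, err := pgxpool.ParseConfig("postgres://user:pass@localhost:5432/bench")
if err != nil {
	log.Fatal(err)
}
cfg.MaxConns = int32(runtime.NumCPU() * 4)

pool, err := pgxpool.NewWithConfig(context.Background(), cfg)
if err != nil {
	log.Fatal(err)
}
defer pool.Close()
```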

The Memory Story

Let's talk about what Fiber does really well. Across every single test, Fiber uses less memory than any other Go framework:

  • Baseline 4K: 144 MB (vs gin's 375 MB)
  • Pipelined 4K: 96 MB (vs gin's 1 GB)
  • JSON 4K: 171 MB (vs gin's 433 MB)
  • Mixed 4K: 761 MB (vs go-fasthttp's 79.9 GB)
  • Uploads 256: 296 MB (vs echo's 541 MB)
  • Limited Conn: 68 MB (vs go-fasthttp's 100 MB)

This is fasthttp's zero-allocation philosophy showing through. Even with Fiber's middleware layer on top, the underlying engine reuses buffers aggressively and avoids heap allocations. In constrained environments — containers, edge deployments, shared hosting — this matters more than raw throughput.

Uploads: Beating fasthttp

Here's a fun one. File uploads at 256 connections:

| Framework | Requests/sec | Memory |
| --- | --- | --- |
| echo | 1,334 | 541 MB |
| chi | 1,326 | 509 MB |
| gin | 1,320 | 582 MB |
| fiber | 1,222 | 296 MB |
| go-fasthttp | 910 | 15.5 GB |

Fiber is 4th, but go-fasthttp is last — and using 15.5 GB of RAM to process uploads at only 910 rps. Fiber handles uploads with StreamRequestBody: true and c.Request().BodyStream(), which streams the body to /dev/null efficiently. The net/http-based frameworks (gin, echo, chi) do slightly better here, but fasthttp's approach of reading the entire body into memory is catastrophic for large file uploads.

Architecture Deep Dive

Looking at Fiber's HttpArena implementation:

The good:

  • StreamRequestBody: true — avoids buffering entire request bodies
  • BodyLimit: 25 * 1024 * 1024 — explicit limits prevent OOM
  • Pre-computed JSON for the compression endpoint (jsonLargeResponse)
  • Static files loaded into memory at startup (staticFiles map)
  • Using pgxpool for async DB connections, modernc.org/sqlite for sync DB

The concerning:

  • Global compression middleware penalizes all endpoints
  • Single app.Listen() vs go-fasthttp's per-core listeners with SO_REUSEPORT
  • JSON endpoint allocates new slices per request (could pre-compute like the compression endpoint)
  • Manual json.Marshal instead of writing directly to the response writer

What could be improved:

  • Apply compression only to the /compression endpoint
  • Use Fiber's Prefork mode (built-in!) to match go-fasthttp's multi-listener approach
  • Pre-compute the JSON response like the compression response
  • Use sonic or go-json instead of encoding/json

Fiber actually has a Prefork: true config option that does SO_REUSEPORT under the hood. The benchmark implementation doesn't use it. That alone could close a significant chunk of the gap with raw fasthttp.
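
A minimal sketch of that change (the non-Prefork fields are the ones the article lists; this is not the benchmark's actual code):

```go
// Sketch: enabling Fiber's built-in prefork. Each child process gets
// its own SO_REUSEPORT listener, mirroring go-fasthttp's per-core setup.
app := fiber.New(fiber.Config{
	Prefork:           true,
	StreamRequestBody: true,
	BodyLimit:         25 * 1024 * 1024,
})
```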

The Verdict: Who Should Use Fiber?

Fiber is perfect for you if:

  • You want Express-like ergonomics in Go
  • Memory efficiency matters (containers, K8s with resource limits)
  • You're building APIs that do real work (DB queries, mixed workloads) rather than pure I/O benchmarks
  • You want a single framework that handles compression, routing, and middleware cleanly

Consider raw fasthttp if:

  • You need maximum pipelining throughput (17.8M vs 623K rps is hard to ignore)
  • You're building a proxy or gateway where every microsecond counts
  • You don't mind manual routing and zero framework niceties

Consider gin/echo if:

  • You want net/http compatibility and the broader Go ecosystem
  • Upload performance matters more than memory efficiency
  • You're more comfortable with net/http patterns

Fiber occupies an interesting niche: it's the most memory-efficient Go framework while being competitive in real-world mixed workloads. It's not the throughput king, and it probably shouldn't be — that's not what frameworks are for. Frameworks trade raw speed for developer experience. Fiber makes that trade while keeping memory usage remarkably low.

The 28x pipelining gap is eye-catching, but pipelining is a synthetic benchmark that few production workloads actually use. In mixed workloads — which better simulate real APIs — Fiber beats gin by 80% while using less RAM.

That's a framework doing its job well.


All data from HttpArena (GitHub). Test environment: 64 threads, various connection counts. Check the site for the full leaderboards and methodology.
