Every few months, someone posts "Bun is fast" on Twitter and the replies turn into a warzone. Node fans say it doesn't matter. Deno fans say their runtime is better. Rust folks just post flamegraphs.
So let's look at actual numbers. I ran Bun through HttpArena, an open-source benchmark suite that tests HTTP frameworks across a bunch of real-world-ish scenarios — not just "hello world" in a loop. We're talking baseline throughput, pipelining, JSON serialization, compression, mixed workloads, uploads, noisy neighbor tolerance, and more.
The results are... honestly fascinating. Bun is a study in contrasts.
What is Bun?
If you've been living under a rock: Bun is a JavaScript/TypeScript runtime built from scratch using JavaScriptCore (Safari's engine) instead of V8. It's written in Zig and aims to be a drop-in replacement for Node.js — but faster at everything.
Its built-in HTTP server (`Bun.serve()`) skips the Node.js `http` module entirely and goes straight to optimized native code. In the HttpArena benchmark, the implementation uses `reusePort` and spawns one Bun process per CPU core — simple multiprocess scaling with no clustering library needed.
The Headline Numbers
Let me just lay out where Bun landed across the key tests (at 4,096 connections unless noted):
| Test | Rank | RPS | Latency (avg) | Memory |
|---|---|---|---|---|
| Baseline | #13/51 | 1,557,305 | 2.62ms | 2.2 GiB |
| Pipelined | #41/51 | 491,345 | 106.30ms | 2.0 GiB |
| JSON | #9/50 | 708,960 | 4.58ms | 2.9 GiB |
| Compression | #2/49 | 15,804 | 251.28ms | 3.3 GiB |
| Mixed workload | #1/47 | 52,274 | 72.41ms | 6.1 GiB |
| Noisy neighbor | #11/47 | 1,939,652 | 1.25ms | 2.3 GiB |
| Limited conn | #6/51 | 1,388,768 | 2.85ms | 2.4 GiB |
| Upload (256 conn) | #42/48 | 264 | 866.85ms | 10.3 GiB |
| H2 baseline (256) | #18/21 | 378,032 | 72.87ms | 2.2 GiB |
Read that again. #1 in mixed workloads. #2 in compression. But #41 in pipelining and #42 in uploads. That's wild range for a single runtime.
Where Bun Dominates
Mixed Workload: The Overall Champion
The mixed workload test is the closest thing to a "real app" benchmark — it combines baseline requests, JSON serialization, compression, static file serving, and database queries all in one stream. And Bun sits at #1.
Bun beats go-fasthttp, which is usually a throughput monster, and it does it with 6.1 GiB of RAM vs go-fasthttp's absurd 80.7 GiB. That's over 13x more memory-efficient.
Three of the top 5 run on the Bun runtime (bun, Elysia, Hono). The Bun ecosystem basically owns this test.
Why? My theory: Bun's built-in gzip (`Bun.gzipSync()`), native JSON handling, and pre-loaded static files all contribute. When you mix these operations together, Bun's "everything is native" approach pays off against frameworks that rely on a separate npm package for each concern.
Compression: Natively Fast Gzip
Bun's `Bun.gzipSync()` at compression level 1 keeps it competitive with Rust's salvo and ahead of nearly everything else. Deno edges it out here (probably because Deno's compression pipeline is also well optimized), but check the memory: Bun uses 3.3 GiB vs Deno's 12.8 GiB. Nearly 4x more efficient.
The implementation is elegant too — pre-compute the JSON buffer once, compress per request.
JSON Serialization: Top 10 Overall
At #9/50 with 708,960 rps, Bun is the fastest JS/TS runtime for JSON workloads (essentially tied with Elysia, which itself runs on Bun). For context:
Bun is ~20% faster than Node and ~23% faster than Fastify at JSON. Not the 10x improvement some marketing suggests, but a solid, consistent edge.
Limited Connections & Noisy Neighbors
Bun handles constrained scenarios well. At limited connections (#6/51, 1,388,768 rps), it beats everything in the JS/TS world by a comfortable margin — Elysia is next at #12. Under noisy neighbor conditions (#11/47, ~1.94M rps), Bun stays stable and leads the JS/TS pack again.
Where Bun Struggles
Pipelining: The Big Miss
This is the elephant in the room. #41 out of 51 frameworks in pipelining at 4,096 connections, with only 491,345 rps and 106ms average latency.
Node.js is nearly 5x faster than Bun at pipelining. Even Express — Express! — lands just one spot lower at #42. The entire Bun ecosystem (bun, Elysia, Hono) clusters at the bottom.
What's happening? HTTP pipelining sends multiple requests over a single TCP connection without waiting for responses. Bun's server likely processes requests one at a time per connection rather than batching pipelined requests, while Node's `http` module has years of pipelining optimization baked in.
Is this a dealbreaker? Honestly, not for most real apps. HTTP pipelining is rarely used in production (browsers don't even support it over HTTP/1.1 anymore). But if you're building an internal service-to-service API where clients pipeline aggressively, this matters.
Uploads: Surprisingly Weak
At 256 concurrent connections uploading data, Bun lands at #42/48 with only 264 rps and 867ms latency, using 10.3 GiB of memory.
Node is 3.5x faster at uploads. Express — the framework everyone loves to call slow — handles uploads 3.6x faster than Bun. This suggests Bun's request-body reading has significant overhead for large payloads, or that there's a memory-management issue when buffering upload data.
HTTP/2: Not There Yet
Bun's H2 support exists but isn't competitive.
Node.js beats Bun by almost 4x in HTTP/2. Even at higher connection counts (1,024), Bun only reaches 558,342 rps. If you're doing H2-heavy work, Node is the better runtime right now.
The "Bun Ecosystem" Effect
One of the coolest things the data reveals: frameworks running on Bun tend to perform very similarly to bare Bun. At 4,096 connections:
| Framework | Baseline RPS | JSON RPS | Mixed RPS |
|---|---|---|---|
| bun (bare) | 1,557,305 | 708,960 | 52,274 |
| Elysia | 1,458,341 | 722,557 | 51,251 |
| Hono (Bun) | 1,242,917 | 662,019 | 49,378 |
The abstraction cost of a framework on top of Bun is remarkably small — maybe 5-20%. Compare that to Node.js where Express is 76% slower than bare Node in baseline. Bun's API is apparently so close to what frameworks need that there's minimal overhead in wrapping it.
Architecture: What Makes It Tick
Looking at the HttpArena implementation, a few things stand out:
Multi-process via reusePort: The entrypoint script spawns one Bun process per CPU core, each calling `Bun.serve()` with `reusePort: true`. The kernel load-balances connections across processes. Simple, effective, no IPC overhead.
Everything pre-loaded: Static files, datasets, and the SQLite database are all loaded at startup. The dataset endpoint re-processes its data per request (as the benchmark requires), but the raw data is already in memory.
Bun.gzipSync() over zlib: The compression endpoint uses Bun's native gzip instead of Node's zlib bindings. This is why compression performance is stellar — it's going through Zig's optimized zlib implementation.
Minimal dependencies: Only a PostgreSQL client. Everything else — HTTP serving, SQLite, gzip, file reading — uses Bun built-ins. Fewer layers, fewer places for overhead to hide.
Who Should Use Bun?
Based on these numbers, Bun is a strong choice if:
- Your app does a mix of things (JSON, compression, static files, DB queries) — Bun literally wins this category
- You want good JSON throughput without leaving the JS/TS ecosystem
- You care about memory efficiency — Bun consistently uses less RAM than Node/Deno at similar throughput
- You want minimal deps — built-in SQLite, gzip, and HTTP server reduce your node_modules
Think twice if:
- You need HTTP/2 performance — Node is 4x faster today
- You handle lots of uploads — Node/Express handle it much better
- You're building pipelining-heavy internal services — unlikely, but if so, Node or Deno serve this better
The Bottom Line
Bun isn't uniformly faster than everything — no runtime is. But it has a genuinely impressive performance shape: it excels at the things most web apps actually do (mixed workloads, JSON, compression) while being memory-efficient. Its weaknesses (pipelining, uploads, H2) are in areas that matter less for typical web services.
The real story isn't "Bun fast, Node slow." It's that Bun makes different tradeoffs. JavaScriptCore over V8. Native built-ins over npm packages. Simple multi-process over clustering. And for a lot of real-world use cases, those tradeoffs pay off beautifully.
All data from HttpArena (GitHub). Tests run on identical hardware with standardized configurations. Check the repo for methodology and raw data.
Previous deep dives: Actix-web | go-fasthttp | Drogon