I have a dirty secret - I use Bun instead of NodeJS for my own projects.
I use it for the better developer experience and the other conveniences it ships with.
I always take someone's claims about better performance with a grain of salt. Benchmarks and other comparisons are as much a marketing tool as anything else, especially when you don't have time to verify the claims yourself. In my case I had a chance to compare Bun's performance myself, and I can confirm: Bun is faster than NodeJS.
If you have written a fair amount of JavaScript, you probably know that the Garbage Collector is one of the top causes of high CPU usage and that it contributes a great deal to application performance degradation. This is especially true for high-load services, and it is usually the main reason why engineers sometimes resort to rewriting their work in other languages. This is not specifically a JavaScript problem: any garbage-collected language has its own patterns for mitigating GC spikes.
Recently I had to write a test for my pet project, a DAL (Data Access Layer) that is still in development, to check for possible memory leaks. The DAL is a proxy for relational databases and is written in Go, which is also a garbage-collected language. It was originally written in NodeJS, but some time ago I decided to refresh the codebase and rewrite it in Go for various reasons. For the sake of backward compatibility with older versions I also decided to explore ways to integrate the DAL into a NodeJS process, and because I use Bun, I couldn't miss the opportunity to try its FFI interface.
Marrying Golang and JS wasn't a challenge: a simple C++ glue layer in NodeJS's case, and FFI/dlopen in Bun's case. The main thing I watch for is possible memory leaks. It is easy to miss a pointer during development, and then the whole thing is destined to crash.
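To make the Bun side concrete, here is a minimal sketch of opening the Go shared library over FFI. The library path and the BuildQuery/FreeResult symbol names are my own placeholders for illustration, not the actual DAL exports.

```ts
import { dlopen, FFIType, suffix } from "bun:ffi";

// Assumes the Go library is compiled with `go build -buildmode=c-shared`;
// `suffix` resolves to "dylib" on macOS and "so" on Linux.
const lib = dlopen(`./libdal.${suffix}`, {
  // Builds an SQL query from a MessagePack-encoded payload (hypothetical export).
  BuildQuery: { args: [FFIType.ptr, FFIType.u32], returns: FFIType.ptr },
  // Releases memory the Go side allocated for the result (hypothetical export).
  FreeResult: { args: [FFIType.ptr], returns: FFIType.void },
});
```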
Testing Methodology
In order to check for memory leaks I wrote a benchmark that spams my library with 100M (100,000,000) simple queries, which amounts to roughly 10 Gb of data transfer. The benchmark tracks the process's memory, and if there is a leak, the difference between RAM usage at startup and at the end of the run will be significant. The benchmark also gives an opportunity to check how well the GC performs under load.
Parameter | Value |
---|---|
OS | Mac OS, ARM64 |
Device | MacBook Air M2 |
RAM | 24 Gb |
DB | SQLite 3, WAL mode |
NodeJS | v20.10.0 |
Bun | v1.1.25 |
The following query is encoded to MessagePack format and passed to the builder method via the native bindings:
instance
.In('table')
.Find({
a: 1,
b: {
$gt: 2,
},
})
// SELECT * FROM table WHERE a = 1 AND b > 2 ;
The database responds with an error message, so we don't read anything from the disk.
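As a rough sketch of that step (the payload shape is an assumption, and @msgpack/msgpack stands in for whatever encoder the DAL actually uses):

```ts
import { encode } from "@msgpack/msgpack";

// Hypothetical wire format for the builder call above; the real DAL
// payload may be shaped differently.
const payload = encode({
  in: "table",
  find: { a: 1, b: { $gt: 2 } },
});

// A pointer to these bytes plus their length is what gets handed to the
// native builder method across the FFI / N-API boundary.
console.log(`${payload.byteLength} bytes per query`);
```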
The test's purpose is to check for leaks and measure performance.
Metrics
- Process memory at Start
- Process memory at the Nth iteration
- Average memory at the End
- Time to end
The full benchmark source can be found here.
bun bench:node #npm run bench:node
bun bench:bun
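For reference, here is a minimal sketch of how metrics like these can be collected; process.memoryUsage() and console.time() are available in both NodeJS and Bun, and runQuery() below is just a placeholder for the real native call.

```ts
// Placeholder for the actual call into the native DAL bindings.
const runQuery = () => {};

const toMb = (bytes: number) => Math.round(bytes / 1024 / 1024);
const logMemory = (label: string) => {
  const m = process.memoryUsage();
  console.log(`${label} rss: ${toMb(m.rss)} Mb, external: ${toMb(m.external)} Mb`);
};

logMemory("START");
console.time("Time to end");

for (let i = 1; i <= 100_000_000; i++) {
  runQuery();
  // Sample memory periodically to see how the GC behaves under load.
  if (i % 10_000_000 === 0) logMemory(`iteration ${i}`);
}

console.timeEnd("Time to end");
logMemory("END");
```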
The main differences
- Bun opens the shared library (dylib) and allocates it in external memory.
- NodeJS uses NAPI; the C++ binding is slightly different but uses the same methods.
- Both implementations work the same way and free unused memory manually when required (see the sketch after this list).
- Bun did not require writing any C++ glue; it provides an FFI interface.
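Here is roughly what that manual freeing can look like on the Bun side, reusing the hypothetical BuildQuery/FreeResult exports from the earlier sketch; the N-API version does the equivalent inside the C++ glue.

```ts
import { CString, dlopen, FFIType, ptr, suffix } from "bun:ffi";

const lib = dlopen(`./libdal.${suffix}`, {
  BuildQuery: { args: [FFIType.ptr, FFIType.u32], returns: FFIType.ptr },
  FreeResult: { args: [FFIType.ptr], returns: FFIType.void },
});

function buildQuery(payload: Uint8Array): string {
  // Memory allocated on the Go side is invisible to the JS garbage
  // collector, so it must be released explicitly.
  const resultPtr = lib.symbols.BuildQuery(ptr(payload), payload.byteLength);
  try {
    // Copy the result out of Go-owned memory before releasing it.
    return new CString(resultPtr).toString();
  } finally {
    // Forgetting this call is exactly the kind of leak the benchmark hunts for.
    lib.symbols.FreeResult(resultPtr);
  }
}
```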
NodeJS Performance
START
rss: 32 Mb
external: 1 Mb
buffers: 3 Mb
total: 4 Mb
Data transfered: 11539 Mb
Time to end: 1:16.586 (m:ss.mmm)
AVERAGE:
rss: 51 Mb
external: 2 Mb
buffers: 4 Mb
total: 6 Mb
Observations:
- High CPU usage (top - ~120%)
Bun Performance
START
rss: 37 Mb
external: 0 Mb
buffers: 0 Mb
total: 2 Mb
Data transfered: 11539 Mb
[46.25s] Time to end
AVERAGE:
rss: 82 Mb
external: 25 Mb
buffers: 29 Mb
total: 22 Mb
Observations:
- High CPU usage (top - ~101%)
The Result
Runtime | RSS at Start | Average Memory at Runtime | Time to End |
---|---|---|---|
NodeJS | 32 Mb | 51 Mb | 76 sec |
Bun | 37 Mb | 82 Mb | 46 sec |
Even though Bun allocated more memory at runtime, it executed the same script roughly 1.7x faster than NodeJS (46 sec vs ~76 sec).
For myself, I've concluded that Bun might be cheaper in terms of cloud costs.
Top comments (3)
Hey so, I've run my own benchmarks against 3 different frameworks. I then tried to run these benchmarks using Bun and got almost identical results. With real-world logic, it seems that the performance benefits are not that impactful. I tested on an M4 with 12 cores and 24 GB of RAM. I didn't care about RAM or processing, just the outcome: time to 10,000 operations, or how many operations in 2 seconds.
I am not seeing a real benefit of Bun vs Node in real-world implementations.
Awesome insights and thanks for sharing your benchmarks! It’s great to see real-world data on how Bun stacks up against Node in memory management and speed, especially in heavy-load scenarios. I agree with you—benchmarks can sometimes be all about the marketing, so firsthand experience like yours is super valuable for the community.
Bun’s performance benefits, especially with FFI and reduced setup complexity, definitely seem appealing, and it’s impressive how well it handles intensive operations. While Node’s ecosystem remains strong, Bun’s all-in-one approach with built-in tools for TypeScript, testing, and package management gives it an edge for developers looking to streamline their workflows.
We actually wrote a piece comparing Bun and Node in detail, if you’re interested in exploring some of the design choices that set Bun apart. Here’s the link if you want to take a look: scalablepath.com/nodejs/bun-node-a...
Exciting times for JavaScript developers—thanks again for sharing your experience!
I had read in several articles that Bun is faster and will definitely try it.