TL;DR: I built bun-image-turbo, a Rust-powered image processing library that outperforms Sharp in most benchmarks. It's 36x faster at metadata extraction, 4.5x faster under concurrent load, and has built-in Blurhash support. Works seamlessly with Bun and Node.js.
Stop Waiting for Your Images to Process
Every millisecond counts. Your users are waiting. Your server is sweating.
You're processing 1,000 product images for your e-commerce site. With Sharp, that's 10 seconds of metadata extraction. With bun-image-turbo? Under 300ms.
That's not a typo. 36x faster.
The Problem with Image Processing in Node.js
We've all been there. You're building an image-heavy application - maybe an e-commerce platform, a CMS, or a social media app. You reach for Sharp because it's the go-to solution.
But then you hit these walls:
- Metadata extraction feels slow when processing thousands of images
- Your server struggles under concurrent image operations
- You need Blurhash placeholders but have to add another dependency
- Complex transform pipelines eat up precious milliseconds
- You want to try Bun but your image library doesn't fully support it
I decided to solve this. From scratch. In Rust.
Introducing bun-image-turbo
Works with Bun 1.0+ AND Node.js 18+ - Use the same code everywhere
```bash
# Bun
bun add bun-image-turbo

# Node.js / npm
npm install bun-image-turbo

# yarn
yarn add bun-image-turbo

# pnpm
pnpm add bun-image-turbo
```
A high-performance image processing library built with:
- Rust for blazing fast performance
- TurboJPEG (libjpeg-turbo) with SIMD acceleration
- napi-rs for seamless Node.js/Bun integration
- Zero-copy buffers for minimal memory overhead
- 100% TypeScript with full type definitions
The Benchmarks Don't Lie
"Show me the numbers" - Every developer ever
Tested on Apple M1 Pro with Bun 1.3.3 vs Sharp v0.34.5:
```
┌───────────────────────────────────────────────────────────────────────────┐
│                   bun-image-turbo vs sharp Performance                     │
├───────────────────────────────────────────────────────────────────────────┤
│ Metadata Extraction  █████████████████████████████████████     36x faster  │
│ Transform Pipeline   ████████████████████                     3.4x faster  │
│ Concurrent (50 ops)  ██████████████████████████               4.5x faster  │
│ JPEG Encode          ██████████████                           1.9x faster  │
│ Thumbnail Resize     ████████████                             1.9x faster  │
│ Blurhash Generation  ████████████████████  (4,283 ops/sec)   N/A in sharp  │
└───────────────────────────────────────────────────────────────────────────┘
```
The Numbers (ops/sec - higher is better)
| Operation | bun-image-turbo | sharp | Speedup |
|---|---|---|---|
| Metadata Extraction | 350,000 ops/sec | 9,600 ops/sec | 36x faster |
| Transform Pipeline | 454 ops/sec | 134 ops/sec | 3.4x faster |
| Concurrent (50 ops) | 1,653 ops/sec | 364 ops/sec | 4.5x faster |
| JPEG Encode | 553 ops/sec | 287 ops/sec | 1.9x faster |
Why 36x Faster Metadata?
Sharp decodes the entire image to extract metadata. bun-image-turbo only reads the header bytes. For a 10 MB image, that's the difference between reading 10 MB and reading roughly 100 bytes.
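To make the idea concrete, here is a minimal, hypothetical sketch in TypeScript of what "header-only" means, not bun-image-turbo's actual Rust implementation: for a PNG, the dimensions sit at a fixed offset in the IHDR chunk, so you never have to touch the pixel data.

```ts
// Hypothetical illustration of header-only metadata reading (PNG case only).
// bun-image-turbo's real implementation is Rust and covers more formats.
const PNG_SIGNATURE = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

function pngDimensions(buffer: Buffer): { width: number; height: number } {
  // After the 8-byte signature comes the IHDR chunk:
  // 4-byte length + 4-byte "IHDR" type, then width and height as big-endian u32s.
  if (!buffer.subarray(0, 8).equals(PNG_SIGNATURE)) {
    throw new Error('Not a PNG');
  }
  return {
    width: buffer.readUInt32BE(16),
    height: buffer.readUInt32BE(20),
  };
}

// Only the first ~24 bytes matter, no matter how large the file is.
```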
Your Server Will Thank You
Before (Sharp):
```
📊 Processing 1000 user uploads...
⏱️ Metadata: 10.4s
⏱️ Thumbnails: 45.2s
⏱️ WebP conversion: 38.1s
💀 Server CPU: 98%
```
After (bun-image-turbo):
```
📊 Processing 1000 user uploads...
⏱️ Metadata: 0.28s (36x faster)
⏱️ Thumbnails: 23.5s (1.9x faster)
⏱️ WebP conversion: 19.8s (1.9x faster)
😎 Server CPU: 45%
```
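The batch job behind those numbers looks roughly like this. This is a minimal sketch, not the exact code I timed, and it assumes only the metadata, resize, and toWebp functions shown in the next section:

```ts
import { metadata, resize, toWebp } from 'bun-image-turbo';

// Sketch of the upload pipeline: validate, thumbnail, and convert each image.
// `uploads` stands in for however your app collects the 1,000 source buffers.
async function processUploads(uploads: Buffer[]) {
  for (const upload of uploads) {
    const info = await metadata(upload);                    // header-only, effectively free
    if (info.width < 200 || info.height < 200) continue;    // reject tiny images early

    const thumb = await resize(upload, { width: 200 });
    const webp = await toWebp(upload, { quality: 85 });
    // ...persist `thumb` and `webp` wherever your app stores them
  }
}
```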
Dead Simple API
The API is designed to be intuitive and familiar:
```ts
import { resize, toWebp, metadata, transform, blurhash } from 'bun-image-turbo';

// Get metadata (350,000 ops/sec!)
const info = await metadata(buffer);
console.log(`${info.width}x${info.height} ${info.format}`);

// Resize with high-quality Lanczos3
const thumb = await resize(buffer, { width: 200 });

// Convert to modern formats
const webp = await toWebp(buffer, { quality: 85 });

// Complex pipeline in ONE call (3.4x faster than Sharp)
const result = await transform(buffer, {
  resize: { width: 400, height: 300, fit: 'cover' },
  rotate: 90,
  grayscale: true,
  sharpen: 10,
  output: { format: 'webp', webp: { quality: 80 } }
});

// Built-in Blurhash (4,283 ops/sec) - No extra dependency!
const { hash } = await blurhash(buffer, 4, 3);
// "LEHV6nWB2yk8pyo0adR*.7kCMdnj"
```
Sync APIs for When You Need Them
```ts
import { resizeSync, metadataSync, transformSync } from 'bun-image-turbo';

// Blocking operations when async isn't needed
const info = metadataSync(buffer);
const thumb = resizeSync(buffer, { width: 200 });
```
Server Workloads? We Got You
The 4.5x improvement under concurrent load is where bun-image-turbo really shines for production servers:
```ts
// Process 50 images concurrently
const results = await Promise.all(
  images.map(img => resize(img, { width: 800 }))
);

// bun-image-turbo: 30ms total  (1,653 ops/sec)
// sharp:           137ms total (364 ops/sec)
```
This means your API can handle 4.5x more image requests before you need to scale.
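One practical note: an unbounded Promise.all over thousands of images can still spike memory, because every decoded buffer is in flight at once. A small chunking helper, plain TypeScript and nothing specific to bun-image-turbo, keeps the concurrency at a fixed width:

```ts
// Process items in fixed-size batches so only `limit` operations are in
// flight at a time. Works the same with sharp or bun-image-turbo.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += limit) {
    const batch = items.slice(i, i + limit);
    results.push(...await Promise.all(batch.map(fn)));
  }
  return results;
}

// e.g. keep 50 resizes in flight at a time across 10,000 uploads:
// const thumbs = await mapWithLimit(images, 50, img => resize(img, { width: 800 }));
```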
Real-World Impact
| Scenario | Sharp | bun-image-turbo | Savings |
|---|---|---|---|
| 10K daily uploads | 2 servers | 1 server | $50/month |
| 100K daily uploads | 8 servers | 2 servers | $300/month |
| 1M daily uploads | 40 servers | 10 servers | $1,500/month |
Estimates based on the 4.5x throughput improvement under concurrent load; actual savings depend on instance size and traffic.
Works Everywhere
Runtime Support:
- ✅ Bun 1.0+ - First-class support, optimized for Bun
- ✅ Node.js 18+ - Full compatibility, same API
Platform Support:
| Platform | Architecture | Status |
|---|---|---|
| macOS | ARM64 (M1/M2/M3) | ✅ |
| macOS | x64 (Intel) | ✅ |
| Linux | x64 (glibc) | ✅ |
| Linux | x64 (musl/Alpine) | ✅ |
| Linux | ARM64 | ✅ |
| Windows | x64 | ✅ |
| Windows | ARM64 | ✅ |
7 prebuilt binaries - No compilation needed. Just npm install and go.
Migration from Sharp is Easy
```ts
// Before (Sharp)
import sharp from 'sharp';

const result = await sharp(buffer)
  .resize(400, 300)
  .rotate(90)
  .grayscale()
  .webp({ quality: 80 })
  .toBuffer();

// After (bun-image-turbo)
import { transform } from 'bun-image-turbo';

const result = await transform(buffer, {
  resize: { width: 400, height: 300 },
  rotate: 90,
  grayscale: true,
  output: { format: 'webp', webp: { quality: 80 } }
});
```
Same result. 3.4x faster.
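If you have many call sites written against Sharp's chained style, you don't have to rewrite them all at once. A thin wrapper can collect the chain into a single transform call. This is my hypothetical sketch, not part of the library, and it only covers the options shown above:

```ts
import { transform } from 'bun-image-turbo';

// Option names mirror the documented transform() options; anything beyond
// this subset would need to be added by hand.
type ChainOptions = {
  resize?: { width: number; height?: number };
  rotate?: number;
  grayscale?: boolean;
  output?: { format: 'webp'; webp: { quality?: number } };
};

class TurboChain {
  private opts: ChainOptions = {};

  constructor(private buffer: Buffer) {}

  resize(width: number, height?: number) { this.opts.resize = { width, height }; return this; }
  rotate(angle: number)                  { this.opts.rotate = angle; return this; }
  grayscale()                            { this.opts.grayscale = true; return this; }
  webp(options: { quality?: number } = {}) {
    this.opts.output = { format: 'webp', webp: options };
    return this;
  }
  toBuffer() { return transform(this.buffer, this.opts); }
}

// Usage stays close to the Sharp version:
// const result = await new TurboChain(buffer)
//   .resize(400, 300).rotate(90).grayscale().webp({ quality: 80 }).toBuffer();
```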
When to Use bun-image-turbo
Use it when you need:
- Fast metadata extraction (image galleries, validation)
- High-concurrency image processing (APIs, workers)
- Transform pipelines (thumbnail generation)
- Blurhash placeholders (lazy loading)
- Bun OR Node.js compatibility
- Production performance at scale
Stick with Sharp when you need:
- Animated GIF processing
- Advanced color space manipulation
- SVG rendering
Get Started in 30 Seconds
With Bun:
```ts
import { transform } from 'bun-image-turbo';

const input = await Bun.file('photo.jpg').arrayBuffer();
const optimized = await transform(Buffer.from(input), {
  resize: { width: 1200 },
  output: { format: 'webp', webp: { quality: 85 } }
});
await Bun.write('photo.webp', optimized);
```
With Node.js:
```ts
import { transform } from 'bun-image-turbo';
import { readFile, writeFile } from 'fs/promises';

const input = await readFile('photo.jpg');
const optimized = await transform(input, {
  resize: { width: 1200 },
  output: { format: 'webp', webp: { quality: 85 } }
});
await writeFile('photo.webp', optimized);
```
Why I Built This
I was building an image-heavy SaaS and hit performance walls with existing solutions. The metadata extraction was killing my upload pipeline. Concurrent processing was melting my servers.
So I did what any reasonable developer would do: rewrote it in Rust.
6 months, 10,000 lines of Rust, and countless benchmarks later - here we are.
Links
What's Next?
- AVIF support (coming soon)
- Streaming API for large files
- More filter options
- WebAssembly build for edge runtimes
Star the repo if this helps you! ⭐
The Bottom Line
| Metric | Improvement |
|---|---|
| Metadata extraction | 36x faster |
| Concurrent operations | 4.5x faster |
| Transform pipelines | 3.4x faster |
| JPEG encoding | 1.9x faster |
| Server costs | Up to 75% less |
Stop leaving performance on the table. Your images deserve better.
```bash
npm install bun-image-turbo
```
Built with Rust, powered by libjpeg-turbo, made for the modern JavaScript ecosystem.
Works with Bun. Works with Node.js. Just works.
Tags: #javascript #rust #nodejs #bun #webdev #performance #opensource #imageprocessing #webp #optimization