Bun just joined Anthropic.
Claude Code now ships as a Bun executable to millions of users. This isn't just news — it's a signal. The JavaScript ecosystem is shifting.
But here's what nobody's talking about:
Sharp wasn't built for this future.
## The Problem With Sharp
Don't get me wrong — Sharp is excellent. It's battle-tested. It powers thousands of production apps.
But it has limitations:
- **No native HEIC support out of the box**
  - Requires building libvips from source
  - Custom Docker layers for Lambda
  - Complex CI/CD pipelines
- **Metadata extraction is slow**
  - Decodes the entire image just to read dimensions
  - For a 10MB image, that's reading 10MB instead of ~100 bytes
- **Heavy under concurrent load**
  - Not optimized for modern serverless architectures
I hit all these walls while building an image-heavy application. Processing thousands of images was slow. Servers were expensive. Users were waiting.
So I did something about it.
## Introducing bun-image-turbo
A Rust-powered image processing library designed for Bun and Node.js from day one.
100% open source. MIT licensed. Free forever.
```bash
npm install bun-image-turbo
```
That's it. No custom builds. No compilation. Native HEIC support included (see the platform table below for coverage).
## The Benchmarks
Tested on Apple M1 Pro with Bun 1.3.3:
| Operation | Sharp | bun-image-turbo | Improvement |
|---|---|---|---|
| WebP Metadata | 3.4ms | 0.004ms | 950x faster |
| JPEG Metadata | 0.1ms | 0.003ms | 38x faster |
| PNG Metadata | 0.08ms | 0.002ms | 40x faster |
| 50 Concurrent Ops | 160ms | 62ms | 2.6x faster |
| Transform Pipeline | 19.1ms | 12.2ms | 1.6x faster |
| HEIC Support | ❌ (needs custom build) | ✅ Native | ∞ |
## Why 950x Faster Metadata?
Most libraries decode the entire image to extract metadata.
Sharp's approach:

```text
10MB image → Decode all 10MB → Extract width/height → Done
Time: 3.4ms
```

bun-image-turbo reads only the header bytes:

```text
10MB image → Read ~100 bytes of header → Extract width/height → Done
Time: 0.004ms
```
Same result. 950x less work.
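To make the technique concrete, here's a minimal sketch of header-only dimension parsing for PNG in plain TypeScript. This illustrates the general approach only; the library's actual parser is in Rust and covers every supported format:

```ts
// Sketch: read PNG dimensions from the first 24 bytes,
// never touching the pixel data.
function pngDimensions(buf: Uint8Array): { width: number; height: number } | null {
  // Every PNG starts with this 8-byte signature.
  const signature = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
  if (buf.length < 24 || !signature.every((b, i) => buf[i] === b)) return null;
  // The IHDR chunk always comes first: width and height are
  // big-endian u32s at byte offsets 16 and 20.
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  return { width: view.getUint32(16), height: view.getUint32(20) };
}
```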
## The Full Optimization Stack
### 1. Header-Only Metadata
Only read what you need. Don't decode pixels for metadata.
### 2. Shrink-on-Decode
For JPEG and HEIC, decode directly at reduced resolution:
```text
4000px original → Need 200px thumbnail

Sharp: Decode 4000px → Resize to 200px
bun-image-turbo: Decode at 500px (1/8 scale) → Resize to 200px
```
Fewer pixels = faster processing.
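Here's a sketch of how such a decode scale could be chosen, assuming libjpeg-style 1/2, 1/4, 1/8 scaled decoding (the function name is illustrative, not the library's API):

```ts
// Sketch: pick the largest power-of-two shrink factor (up to 1/8,
// the usual libjpeg limit) that still decodes at or above the target.
function decodeShrinkFactor(srcWidth: number, targetWidth: number): number {
  let factor = 1;
  // Halve until the next step would decode below the target width.
  while (factor < 8 && srcWidth / (factor * 2) >= targetWidth) {
    factor *= 2;
  }
  return factor;
}

decodeShrinkFactor(4000, 200); // => 8: decode at 500px, then resize to 200px
```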
### 3. Multi-Step Resize
For large downscales, progressive halving is faster:

```text
4000px → 2000px → 1000px → 500px → 200px
```
Each step uses a Box filter (optimal for downscaling).
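A minimal sketch of how the intermediate sizes could be planned (names are illustrative, not the library's internals):

```ts
// Sketch: plan a progressive-halving resize. Halve while the halved
// width is still at least 2x the target, then make one final step
// to the exact size with a higher-quality filter.
function resizePlan(srcWidth: number, targetWidth: number): number[] {
  const steps: number[] = [];
  let w = srcWidth;
  while (w / 2 >= targetWidth * 2) {
    w = Math.floor(w / 2);
    steps.push(w);
  }
  steps.push(targetWidth);
  return steps;
}

resizePlan(4000, 200); // => [2000, 1000, 500, 200]
```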
### 4. Adaptive Algorithm Selection
Automatically selects the best filter:
| Scale Factor | Algorithm | Why |
|---|---|---|
| >4x downscale | Box | Fastest, good averaging |
| 2-4x downscale | Bilinear | Fast, acceptable quality |
| <2x downscale | Lanczos3 | Best quality |
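In code, this selection reduces to a threshold check. A minimal sketch mirroring the table above (thresholds taken from the table, not from the library source):

```ts
type Filter = 'box' | 'bilinear' | 'lanczos3';

// Sketch: choose a resize filter from the downscale factor,
// following the thresholds in the table above.
function pickFilter(srcWidth: number, dstWidth: number): Filter {
  const scale = srcWidth / dstWidth;
  if (scale > 4) return 'box';       // heavy downscale: fast averaging
  if (scale >= 2) return 'bilinear'; // moderate: fast, acceptable quality
  return 'lanczos3';                 // mild: best quality
}
```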
### 5. TurboJPEG with SIMD
Uses libjpeg-turbo with SIMD vector instructions:
- SSE2/AVX2 on x86
- NEON on ARM
2-6x faster than standard libjpeg.
## Real-World Impact
**Before (Sharp):**

```text
📊 Processing 1000 user uploads...
⏱️ Metadata: 10.4s
⏱️ Thumbnails: 45.2s
⏱️ WebP conversion: 38.1s
💀 Server CPU: 98%
```

**After (bun-image-turbo):**

```text
📊 Processing 1000 user uploads...
⏱️ Metadata: 0.28s (37x faster)
⏱️ Thumbnails: 23.5s (1.9x faster)
⏱️ WebP conversion: 19.8s (1.9x faster)
😎 Server CPU: 45%
```
Translation: Fewer servers. Lower costs. Happier users.
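For the curious, a workload like this is just a bounded-concurrency loop over the API shown in the Quick Start below. A sketch (the batching helper, `processUpload`, and the concurrency limit are my own illustration, not library API):

```ts
import { metadata, transform } from 'bun-image-turbo';

const CONCURRENCY = 50; // illustrative limit; tune for your hardware

async function processUpload(buffer: Buffer) {
  const info = await metadata(buffer); // header-only: effectively free
  console.log(`processing ${info.width}x${info.height} ${info.format}`);
  return transform(buffer, {
    resize: { width: 800 },
    output: { format: 'webp', webp: { quality: 85 } },
  });
}

async function processAll(buffers: Buffer[]) {
  const results: Awaited<ReturnType<typeof processUpload>>[] = [];
  // Process in bounded chunks so the native layer stays busy
  // without unbounded memory use.
  for (let i = 0; i < buffers.length; i += CONCURRENCY) {
    const chunk = buffers.slice(i, i + CONCURRENCY);
    results.push(...(await Promise.all(chunk.map(processUpload))));
  }
  return results;
}
```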
## Quick Start
```ts
import {
  metadata,
  resize,
  transform,
  toWebp,
  blurhash
} from 'bun-image-turbo';

// Read image
const buffer = Buffer.from(await Bun.file('photo.jpg').arrayBuffer());

// Get metadata (950x faster than Sharp!)
const info = await metadata(buffer);
console.log(`${info.width}x${info.height} ${info.format}`);

// Resize with shrink-on-decode
const thumbnail = await resize(buffer, { width: 200 });

// Full transform pipeline
const result = await transform(buffer, {
  resize: { width: 800, height: 600, fit: 'cover' },
  rotate: 90,
  grayscale: true,
  sharpen: 10,
  output: { format: 'webp', webp: { quality: 85 } }
});

// Built-in Blurhash (Sharp doesn't have this!)
const { hash } = await blurhash(buffer, 4, 3);

// Save
await Bun.write('output.webp', result);
```
Works with Node.js too:
```ts
import { readFileSync, writeFileSync } from 'fs';
import { transform } from 'bun-image-turbo';

const buffer = readFileSync('photo.jpg');
const result = await transform(buffer, {
  resize: { width: 800 },
  output: { format: 'webp' }
});

writeFileSync('output.webp', result);
```
## Platform Support
7 prebuilt binaries. No compilation needed.
| Platform | Architecture | Supported | HEIC |
|---|---|---|---|
| macOS | ARM64 (M1/M2/M3/M4) | ✅ | ✅ |
| macOS | x64 (Intel) | ✅ | ❌ |
| Linux | x64 (glibc) | ✅ | ❌ |
| Linux | x64 (musl/Alpine) | ✅ | ❌ |
| Linux | ARM64 | ✅ | ❌ |
| Windows | x64 | ✅ | ❌ |
| Windows | ARM64 | ✅ | ❌ |
## What's Coming Next
This is just v1.2.0. Here's the roadmap:
- 🔜 AVIF write support
- 🔜 Streaming API for large files
- 🔜 More filters & effects
- 🔜 WebAssembly build for edge runtimes
- 🔜 Even more performance optimizations
## Why Open Source?
I could have kept this proprietary. Built a SaaS around it.
But the JavaScript ecosystem gave me everything. Node.js. Bun. npm. Thousands of open source packages.
This is my contribution back.
100% open source. MIT licensed. Free forever.
Use it. Fork it. Contribute. Make it better.
## Links
📖 Documentation: nexus-aissam.github.io/bun-image-turbo
⭐ GitHub: github.com/nexus-aissam/bun-image-turbo
📦 npm: npm install bun-image-turbo
## The Bottom Line
| What You Get | Value |
|---|---|
| Metadata extraction | 950x faster |
| Concurrent operations | 2.6x faster |
| Transform pipelines | 1.6x faster |
| HEIC support | Native (no custom builds) |
| Blurhash | Built-in |
| Price | Free forever |
Bun joined Anthropic. The ecosystem is evolving.
Build for the future.
```bash
npm install bun-image-turbo
```
Got questions? Drop them in the comments.
Found this useful? Star the repo ⭐