Most JSI benchmarks are misleading.
They benchmark the native call itself, but not the actual cost of moving data into JavaScript.
In real React Native applications, the bottleneck is often not JSI itself.
It’s the shape of the data crossing the boundary.
Returning thousands of JavaScript objects from native code creates allocations, property definitions, boxing, hidden classes, and garbage collection pressure. Even with JSI removing the traditional bridge, large payloads can still become surprisingly expensive.
After profiling several heavy JSI workloads, I started experimenting with a different approach:
- no arrays of objects
- no JSON serialization
- no parsing
- no copies
Just one contiguous ArrayBuffer.
That experiment became react-native-columnar.
## The Problem
A typical JSI module often returns something like this:
```cpp
jsi::Array array(rt, rows);
for (uint32_t i = 0; i < rows; ++i) {
  jsi::Object obj(rt);
  // cast: jsi::Value has no uint32_t overload, so pass a plain int
  obj.setProperty(rt, "id", static_cast<int>(i));
  obj.setProperty(rt, "status", 2);
  obj.setProperty(rt, "isActive", true);
  obj.setProperty(rt, "createdAt", 1710000000000.0);
  obj.setProperty(rt, "updatedAt", 1710000000000.0);
  array.setValueAtIndex(rt, i, std::move(obj));
}
return array;
```
At first glance, this looks perfectly fine.
But under the hood, JavaScript engines still need to:
- allocate every object
- create property storage
- box primitive values into JS values
- track allocations for GC
- maintain hidden classes / object shapes
- resolve property accesses
JSI removes the old React Native bridge.
It does not remove JavaScript object creation cost.
When datasets grow large, this overhead becomes dominant very quickly.
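To make the contrast concrete, here is a minimal TypeScript sketch of the two data shapes on the JS side (field names mirror the example above; the row count is arbitrary):

```typescript
const rows = 1000;

// Row-oriented: one heap-allocated object per row, each with its own
// property storage, hidden-class bookkeeping, and GC tracking.
const objects = Array.from({ length: rows }, (_, i) => ({
  id: i,
  status: 2,
  isActive: true,
  createdAt: 1710000000000,
  updatedAt: 1710000000000,
}));

// Column-oriented: five dense typed arrays, zero per-row allocations.
const id = new Int32Array(rows);
const status = new Uint8Array(rows);
const isActive = new Uint8Array(rows);
const createdAt = new Float64Array(rows);
const updatedAt = new Float64Array(rows);
for (let i = 0; i < rows; i++) {
  id[i] = i;
  status[i] = 2;
  isActive[i] = 1; // booleans stored as 0/1 bytes
  createdAt[i] = 1710000000000;
  updatedAt[i] = 1710000000000;
}
// Same logical data: `objects` is 1000 scattered heap objects, while the
// columns are five contiguous blocks the GC treats as opaque bytes.
```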
## The Benchmark
This benchmark was measured on an iPhone 16 Pro using Hermes in release mode.
10,000 iterations · 5 columns:
id (int32) | status (uint8) | isActive (uint8) | createdAt (double) | updatedAt (double)
| Rows | Array of objects | react-native-columnar | Speedup |
|---|---|---|---|
| 100 | ~418.81 ms | ~14.96 ms | 27× |
| 500 | ~2079.81 ms | ~22.06 ms | 94× |
| 1000 | ~4360.11 ms | ~35.89 ms | 121× |
| 2000 | ~9444.47 ms | ~45.39 ms | 208× |
The most interesting part is not just the speedup itself.
It’s how differently both approaches scale.
The object-based version keeps paying for:
- allocations
- property creation
- GC pressure
- object graph growth
Meanwhile, the ArrayBuffer path mostly performs sequential memory writes into one contiguous block.
The overhead stays comparatively flat.
## The Core Idea
Instead of returning rows as objects:
```ts
[
  { id, status, createdAt },
  { id, status, createdAt },
  { id, status, createdAt }
]
```
react-native-columnar stores values by column:
```ts
[id, id, id, id]
[status, status, status]
[createdAt, createdAt]
```
All columns live inside one contiguous ArrayBuffer.
On the JavaScript side, every column becomes a typed array view pointing directly into the same memory.
No copies.
No parsing.
No per-row objects.
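The single-buffer idea can be sketched in a few lines of TypeScript. This is an illustration of the concept, not react-native-columnar's actual memory layout:

```typescript
// Pack three columns back to back inside one ArrayBuffer and expose each
// as a typed-array view that aliases the same memory.
const rows = 4;

const idBytes = rows * Int32Array.BYTES_PER_ELEMENT;     // 16 bytes
const statusBytes = rows * Uint8Array.BYTES_PER_ELEMENT; // 4 bytes
// Float64 views must start on an 8-byte boundary, so pad the offset.
const createdAtOffset = Math.ceil((idBytes + statusBytes) / 8) * 8;
const buffer = new ArrayBuffer(createdAtOffset + rows * 8);

const id = new Int32Array(buffer, 0, rows);
const status = new Uint8Array(buffer, idBytes, rows);
const createdAt = new Float64Array(buffer, createdAtOffset, rows);

id.set([1, 2, 3, 4]);
status.set([0, 1, 0, 1]);
createdAt.fill(1710000000000);
// All three views alias `buffer`: writing through them is a plain memory
// store, and reading them later involves no parsing and no copies.
```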
## Why Columnar Layouts Are Fast
This idea is not new.
High-performance systems already rely heavily on columnar memory layouts:
- Apache Arrow
- DuckDB
- ClickHouse
- analytical databases
Modern CPUs love predictable contiguous memory access.
Sequential memory is cache-friendly.
Object graphs are not.
Arrays of objects scatter data across memory and force the engine to constantly chase pointers and metadata.
Columnar layouts keep values densely packed.
This improves:
- cache locality
- sequential reads
- memory efficiency
- typed array performance
- SIMD/vectorization opportunities
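A hint of what this buys in practice: column-wise work turns into tight linear scans over packed bytes, exactly the access pattern CPUs prefetch best. A small sketch with illustrative values:

```typescript
// Aggregating over columns is a branch-light sequential scan.
const status = new Uint8Array([2, 0, 2, 2, 1, 2]);
const createdAt = new Float64Array([10, 20, 30, 40, 50, 60]);

let active = 0;
let newest = -Infinity;
for (let i = 0; i < status.length; i++) {
  if (status[i] === 2) active++;          // count rows with status 2
  if (createdAt[i] > newest) newest = createdAt[i];
}
// With an array of objects, the same loop would chase a pointer and
// resolve two property lookups per row instead of reading packed bytes.
```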
## Zero-copy Transport
The C++ side writes directly into one binary buffer:
```cpp
ColumnarWriterBuilder<UserSchema> builder(rows);
auto cols = UserSchema::createColumns(builder);
for (uint32_t i = 0; i < rows; ++i) {
  cols.id[i] = dbRow[i].id;
  cols.status[i] = dbRow[i].status;
  cols.isActive[i] = dbRow[i].isActive;
  cols.createdAt[i] = dbRow[i].createdAt;
  cols.updatedAt[i] = dbRow[i].updatedAt;
}
return builder.toArrayBuffer(rt);
```
The JavaScript side reads typed array views over the same memory:
```ts
const [header, columns] = createBufferReader(buffer, USER_SCHEMA);
const [
  idCol,
  statusCol,
  isActiveCol,
  createdAtCol,
  updatedAtCol
] = columns;
const id = idCol[0];
```
No serialization step exists.
The data is already in its final binary form.
That’s the key difference.
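The reader side stays zero-copy because the header only needs to tell it where each column lives. As a toy illustration (this is a hypothetical header format, not the library's actual wire layout), a reader can wrap views around offsets it finds in the header:

```typescript
// Hypothetical layout: [rowCount: uint32][idOffset: uint32][tsOffset: uint32],
// followed by the column data, all inside one ArrayBuffer.
function readColumns(buffer: ArrayBuffer) {
  const [rowCount, idOffset, tsOffset] = new Uint32Array(buffer, 0, 3);
  return {
    id: new Int32Array(buffer, idOffset, rowCount),
    createdAt: new Float64Array(buffer, tsOffset, rowCount),
  };
}

// Build a matching buffer to exercise the reader.
const rows = 3;
const idOffset = 16; // header is 12 bytes, padded to 16
const tsOffset = 32; // 8-byte aligned for Float64
const buffer = new ArrayBuffer(tsOffset + rows * 8);
new Uint32Array(buffer, 0, 3).set([rows, idOffset, tsOffset]);
new Int32Array(buffer, idOffset, rows).set([7, 8, 9]);
new Float64Array(buffer, tsOffset, rows).fill(1710000000000);

const cols = readColumns(buffer);
// cols.id and cols.createdAt alias `buffer`: nothing was parsed or copied.
```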
## Why ArrayBuffer Changes Everything
ArrayBuffer is extremely cheap compared to object graphs.
JavaScript engines are highly optimized for typed arrays because they represent predictable contiguous memory.
Unlike objects, typed arrays:
- avoid property allocation
- avoid hidden class creation
- avoid boxing overhead
- avoid deep object graphs
- minimize GC work
In practice, ArrayBuffer transport becomes surprisingly close to “native memory exposed to JS”.
That drastically reduces boundary overhead.
## Best Use Cases
react-native-columnar is intentionally specialized.
It works best for large dense numeric datasets such as:
- SQLite result sets
- frame processor outputs
- sensor streams
- realtime charts
- analytics pipelines
- image processing
- large JSI payloads
Anywhere you move lots of numeric data between native and JS.
## Tradeoffs
This approach also has tradeoffs.
It is not a replacement for regular JavaScript objects.
You lose some ergonomics in exchange for performance.
Current limitations include:
- numeric-only design
- no native string support
- no nested objects
- no nullable values
- schema synchronization between C++ and JS
Debugging raw binary layouts is also harder than debugging plain objects.
But for performance-critical paths, the gains can be dramatic.
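The schema-synchronization point is worth stressing: the raw bytes are not self-describing, so a reader built against the wrong column order misreads them without any error. A toy illustration:

```typescript
// Writer packs two columns back to back: [id: int32 x n][score: float64 x n].
const n = 2;
const buffer = new ArrayBuffer(n * 4 + n * 8);
new Int32Array(buffer, 0, n).set([1, 2]);
new Float64Array(buffer, n * 4, n).set([0.5, 0.25]);

// A reader with the matching schema recovers the values.
const id = new Int32Array(buffer, 0, n);          // [1, 2]
const score = new Float64Array(buffer, n * 4, n); // [0.5, 0.25]

// A reader whose schema is out of sync reinterprets the same bytes as the
// wrong type and silently produces garbage; no exception is thrown.
const wrong = new Float64Array(buffer, 0, n);
```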
## Final Thoughts
One of the biggest lessons from building this library was that JSI performance is not only about native execution speed.
The shape of the data crossing the boundary matters just as much.
Sometimes more.
If you move large datasets between C++ and JavaScript, arrays of objects can easily dominate the total cost.
ArrayBuffer changes the equation completely.
JSI is already fast.
But your data layout can still make it slow.