Does native React architecture actually perform better than vanilla JS libraries with React wrappers bolted on?
I decided to find out by benchmarking six popular Gantt chart libraries that market themselves as React solutions. The full test setup is here:
GitHub benchmark repo
The results prove an important principle: for React apps that rely on dynamic data, architecture has a measurable impact on performance.
Gantt Chart Components Benchmarked
Here's what I tested:
- SVAR – built as a native React component from day one, actively evolving
- DHTMLX – a powerful vanilla JS Gantt chart with a React wrapper on top
- Bryntum – another feature-rich vanilla JS library wrapped for React
- Syncfusion – part of an enterprise suite, closed-source
- DevExtreme – same story, a closed-source enterprise component
- KendoReact – actually a pure React component, but with weaker functionality
Being built in React matters more than you'd think. DHTMLX and Bryntum are vanilla JavaScript solutions that added React wrappers – an architectural approach that fundamentally shapes their performance profile.
Kendo is genuinely React inside. As for Syncfusion and DevExtreme, there's no way to tell: the source isn't available. SVAR Gantt is one of the few libraries designed specifically for modern React from the start.
We're about to see exactly how that architectural choice plays out under pressure.
A note on numbers: the exact milliseconds and frame rates will vary depending on your browser, hardware, and OS. Don't fixate on absolute values. What matters is the relative performance – who's faster and by how much. I ran all tests on the same machine under identical conditions, so the ratios hold.
Raw Loading Speed
I loaded datasets ranging from 1,000 to 100,000 tasks and measured how long each library takes to render.
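As a rough sketch of how such a measurement can be set up: generate a task tree of the target size, then time a single render pass. The names `generateTasks`, `measureLoad`, and the `renderGantt` callback below are illustrative stand-ins, not the actual benchmark code.

```javascript
// Build a flat-ish task tree: every group of 10 tasks shares one root task.
function generateTasks(count) {
  const tasks = [];
  for (let i = 1; i <= count; i++) {
    tasks.push({
      id: i,
      text: `Task ${i}`,
      start: new Date(2024, 0, 1 + (i % 365)),
      duration: 1 + (i % 10),
      parent: i % 10 === 1 ? 0 : Math.floor((i - 1) / 10) * 10 + 1,
    });
  }
  return tasks;
}

// Wall-clock time for one render pass. `renderGantt` is library-specific:
// mounting the React component for SVAR/Kendo, or calling the wrapper's
// parse/load method for DHTMLX/Bryntum.
function measureLoad(renderGantt, taskCount) {
  const tasks = generateTasks(taskCount);
  const start = performance.now();
  renderGantt(tasks);
  return performance.now() - start;
}
```

In the real benchmark the render callback also waits for the chart to actually appear on screen, since some libraries (as shown below with Bryntum) keep working after the first paint.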
| Library | 1k | 5k | 10k | 50k | 100k |
|---|---|---|---|---|---|
| SVAR | 72 ms | 140 ms | 220 ms | 900 ms | 1,700 ms |
| DHTMLX | 70 ms | 160 ms | 450 ms | 7,076 ms | 26,950 ms |
| Bryntum | 848 ms | 2,945 ms | 5,710 ms | 32,226 ms | – |
| Kendo | 4,350 ms | – | – | – | – |
| Syncfusion | 141 ms | 449 ms | 1,153 ms | error | error |
| DevExtreme | 108 ms | 933 ms | 3,141 ms | – | – |
At 1,000 tasks, most libraries are fine. But things get interesting as datasets grow: DHTMLX and SVAR stay competitive up to 10k tasks. Then DHTMLX slows down significantly. DevExtreme degrades even faster. One likely reason is that both have built-in scheduling engines that run even when you don't need them, dragging performance down for no benefit.
At 50k tasks, the field thins out fast. SVAR Gantt reaches 100k tasks in 1.7 seconds, while DHTMLX takes nearly 27 seconds.
Bryntum deserves a special note. It renders in two phases: raw data first, then a rescheduling pass, then the final paint. The "first paint" numbers look better, but they're misleading. The browser is locked until the full cycle completes. So that 848 ms at 1k? Your user is actually frozen for that entire time.
Kendo becomes impractical beyond small datasets, and Syncfusion throws errors on larger inputs. Not slow, not degraded: it simply crashes.
While native React components leverage React's reconciliation algorithm, vanilla JS wrappers fight against it.
CRUD Operations: Where Architecture Becomes Undeniable
This is where the React-first advantage becomes crystal clear. I ran 100 iterations of mixed CRUD operations (create, update, delete) and measured how long the whole batch takes.
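The benchmark loop can be sketched roughly like this, with a hypothetical `api` object standing in for each library's create/update/delete calls (for DHTMLX these would be `addTask`/`updateTask`/`deleteTask`; for SVAR, plain state updates):

```javascript
// Run a mixed batch of CRUD operations against a Gantt API and time it.
// The operation mix rotates: create, update, delete, create, update, ...
function runCrudBenchmark(api, iterations = 100) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    if (i % 3 === 0) {
      api.create({ id: `new-${i}`, text: `Task ${i}`, duration: 2 });
    } else if (i % 3 === 1) {
      api.update(i, { text: `Renamed ${i}` });
    } else {
      api.remove(i);
    }
  }
  return performance.now() - start;
}
```

The timings in the table below are for the whole 100-operation batch, including any re-rendering the library triggers along the way.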
| Library | 1k | 5k | 10k | 50k | 100k |
|---|---|---|---|---|---|
| SVAR | 40 ms | 95 ms | 226 ms | 585 ms | 1,100 ms |
| DHTMLX | 3,463 ms | 10,931 ms | 30,233 ms | – | – |
| DHTMLX (batch) | 211 ms | 955 ms | 1,904 ms | 12,168 ms | – |
| Bryntum | 1,816 ms | 14,936 ms | 32,272 ms | – | – |
| Bryntum (batch) | 380 ms | 1,466 ms | 2,721 ms | 14,854 ms | – |
| Kendo | 500 ms | – | – | – | – |
| Syncfusion | 2,787 ms | 28,763 ms | – | – | – |
| DevExtreme | 33,242 ms | – | – | – | – |
The contrast could hardly be sharper.
In a true React component, rendering and data updates are separate concerns. You update state, React figures out what changed, and it repaints the minimum necessary DOM.
That's how SVAR Gantt works: without any special API, all updates are batched through normal reactive logic. Result: 40 ms for 100 CRUD operations. Essentially instant.
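The difference between the two strategies can be boiled down to a few lines. This is a minimal simulation of the principle, not any library's actual internals: `repaints` counts how many times the expensive render pass runs for a batch of operations.

```javascript
// Eager strategy: every data operation immediately triggers a re-render.
// This is what per-operation APIs in wrapped vanilla JS engines do.
function applyEagerly(ops, render) {
  let repaints = 0;
  for (const op of ops) {
    op();      // mutate the data...
    render();  // ...then repaint right away: one repaint per operation
    repaints++;
  }
  return repaints;
}

// Batched strategy: mutate everything first, repaint once at the end.
// This is what React's state batching gives a native component for free.
function applyBatched(ops, render) {
  for (const op of ops) op(); // data only; rendering is deferred
  render();                   // a single repaint for the whole batch
  return 1;
}
```

For 100 operations that's 100 expensive render passes versus one, which is the shape of the gap in the table above.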
Now look at the vanilla JS wrappers using their default API:
- DHTMLX: 3.5 seconds (86x slower)
- Bryntum: 1.8 seconds (45x slower)
Why? Each individual operation triggers a full internal update cycle. The overhead is enormous.
Both libraries offer a batch update API, and it helps: DHTMLX drops to 5x slower, Bryntum to 10x. But you have to know the API exists, restructure your code around it, and you're still far behind.
This is where architectural mismatch becomes most visible.
DevExtreme takes 33 seconds for 100 operations at 1k scale. That's not just slow; it's effectively broken.
Live Updates: Real-World Performance
Gantt charts are often used in multi-user environments with real-time collaboration and live data feeds. I measured frames per second while pushing continuous updates to the component.
Real-world use case: WebSocket feeds, multiple users editing simultaneously.
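The fps measurement itself is simple: count frames per one-second window while updates are streaming in. A sketch of the meter is below; in the real benchmark the `now` timestamps come from `requestAnimationFrame` callbacks in the browser, and the function names here are illustrative.

```javascript
// Count rendered frames per one-second window and keep the samples.
function createFpsMeter() {
  let windowStart = null;
  let frames = 0;
  const samples = [];
  return {
    // Call once per animation frame with the frame's timestamp (ms).
    frame(now) {
      if (windowStart === null) windowStart = now;
      frames++;
      if (now - windowStart >= 1000) {
        samples.push(frames); // frames seen in this one-second window
        frames = 0;
        windowStart = now;
      }
    },
    // Average fps across all completed windows.
    average() {
      return samples.reduce((a, b) => a + b, 0) / (samples.length || 1);
    },
  };
}
```

In the browser you would drive it with `requestAnimationFrame(function tick(t) { meter.frame(t); requestAnimationFrame(tick); })` while the update stream runs.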
| Library | 1k | 5k | 10k | 50k | 100k |
|---|---|---|---|---|---|
| SVAR | 60 fps | 30 fps | 15 fps | 5 fps | 3 fps |
| DHTMLX | 50 fps | 18 fps | 7 fps | 0 fps | 0 fps |
| DHTMLX (batch) | 60 fps | 20 fps | 7 fps | 0 fps | – |
| Bryntum | 30 fps | 10 fps | 5 fps | 0 fps | – |
| Bryntum (batch) | 25 fps | 8 fps | 0 fps | – | – |
| Kendo | 10 fps | – | – | – | – |
| Syncfusion | 5 fps | 0 fps | – | – | – |
| DevExtreme | 1 fps | 0 fps | 0 fps | – | – |
Under continuous updates, the differences become even more visible. Only SVAR and DHTMLX reach 60 fps, and only on small datasets; the rest lag from the start and flatline at 0 fps as datasets grow.
Why? Because they're fighting React's update model, not leveraging it.
For apps that need real-time data, native React components aren't optional; they're essential.
The Architectural Lesson
While wrappers around existing JS code can provide good rendering speed, they fail for more complex scenarios. The SVAR React Gantt wins in three critical categories: loading speed, CRUD operations, and live updates.
Notice the pattern? All three are directly tied to how the component integrates with React's update model.
- Loading speed – React reconciliation wins
- CRUD operations – native state management wins
- Live updates – proper reactive architecture wins
The "proper" is an important point here, as Kendo Gantt lags behind miserably being written in "the same" react.
Vanilla JS engines with wrappers can perform well in isolated rendering tasks, but they are less aligned with how modern React applications manage dynamic data.
The real takeaway: architecture matters. A library built for React from day one, using React's own update model, will beat a ten-year-old JavaScript engine with a React wrapper bolted onβat least for the things that matter most in modern apps.
What This Benchmark Doesn't Cover
One important area is not included here: scheduling logic. Auto-scheduling (dependencies, constraints, critical path calculation) is a different type of workload. A library that is slower at rendering might still excel at scheduling, and vice versa.
For example, Bryntum focuses heavily on advanced scheduling capabilities, which deserve a separate evaluation. A dedicated scheduling benchmark could lead to different conclusions depending on the use case.
Want the Full Picture?
This article covers the three metrics where native React shines. I also tested scrolling performance and memory usage, where the story gets more nuanced.
See the complete benchmark with all five metrics, detailed analysis, and interactive charts:
Full React Gantt Chart Benchmark
The full analysis includes scrolling performance (where DHTMLX and Bryntum excel), memory efficiency, and an interactive demo you can run on your own hardware.


