
Ryan Carniato for This is Learning


The Real Cost of UI Components Revisited

With my focus recently going back to optimization for the Solid 1.0 release, I thought I'd revisit my The Real Cost of UI Components article. When I wrote the original article I wasn't really sure what I'd find, and I was a bit cautious, not wanting to offend anyone. I let every framework have its showcase at level 0 and then just built on that.

The shortcoming of not equalizing the implementations is that I didn't actually show the tradeoffs of the Virtual DOM, and I completely glossed over the overhead of Web Components. So I wanted to look at this again with that in mind.

Why now? I've recently been benchmarking Stencil and the new Lit, and it was bugging me that neither of them supports Native Built-ins. This is a problem because a benchmark built around HTMLTableElement means they can't just insert arbitrary Custom Elements. So those implementations were all done in a single large component. I wanted to see if I could better approximate the way these libraries scale.

Mandatory disclaimer: I wrote Solid, but I did not create this benchmark. Take it for what it is. I hope your takeaway is more than "Solid is fast." Different technologies scale differently, and that should be where the focus is.

The Setup

The test is once again a modification of the JS Framework Benchmark. This is our TodoMVC app on steroids. It blasts our implementations with some absurd data, but we will quickly be able to see any bottlenecks.

The important thing to note is that, given the limitations around Native Built-ins, we will be using hand-optimized Web Component solutions. This means better performance than you'd typically find with Lit, so things are slightly skewed in its favor, but it's the best I can do.

When I first started, I ran the tests on the new M1 MacBook Air, but given the issues with applying CPU throttling (known issue), I also ran them on an Intel i7 MacBook Pro. This muddies the narrative a little, but it helps show the difference between running on the latest and greatest and on a slower device (via CPU throttling).

The Scenarios

  • Level 1: The whole benchmark is implemented in a single Component.
  • Level 2: A Component is made per row and per button.
  • Level 3: Each row is further subdivided into Cell Components for each of the four table columns and the remove Icon is also made into a Component.
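To make the scaling concrete, here is a back-of-the-envelope model of how many component instances each level creates for N rows. This is my own illustration, not part of the benchmark; I'm assuming the benchmark's six action buttons, and the per-row breakdown at level 3 (row + 4 cells + remove icon) follows the description above:

```javascript
// Toy model of component instance counts per benchmark level.
// Assumes 6 action buttons; names and counts are illustrative only.
function componentCount(level, rows) {
  switch (level) {
    case 1: // the whole benchmark in a single component
      return 1;
    case 2: // app + one component per button + one per row
      return 1 + 6 + rows;
    case 3: // level 2, plus 4 cell components and a remove icon per row
      return 1 + 6 + rows * (1 + 4 + 1);
    default:
      throw new Error(`unknown level: ${level}`);
  }
}

console.log(componentCount(1, 1000)); // 1
console.log(componentCount(2, 1000)); // 1007
console.log(componentCount(3, 1000)); // 6007
```

So between level 1 and level 3 the component count for a 1,000-row table grows from one to several thousand, which is exactly the overhead these scenarios are designed to expose.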

The Contenders

1. Inferno: One of the quickest Virtual DOM libraries around. While different from React, it boasts React compatibility and will serve as our proxy for VDOM libraries in this test. Source [1, 2, 3]

2. Lit: Google-backed tagged template render library. Given the lack of support for Native Built-ins, I'm using optimized hand-written Custom Element wrappers. I also kept explicit event delegation in, which is an advantage compared to every other non-vanilla implementation. Source [1, 2, 3]

3. Solid: The fastest runtime reactive library. Its components are little more than factory functions, so this should serve as a good comparison. Source [1, 2, 3]

4. Svelte: Generates the smallest bundles with clever use of its compiler. It has its own component system as well. Source [1, 2, 3]

5. vanillajs: Not a framework, just the core implementation. I take the standard implementation and then layer on Web Components as we level up. [1, 2, 3]
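To give some intuition for why Solid's components are so cheap: they are essentially factory functions that run once and wire up fine-grained subscriptions, rather than instances that re-render. Here is a toy sketch of that idea (my own illustration, nothing like Solid's actual implementation, which is far more sophisticated):

```javascript
// Minimal signal: a getter with subscribers, plus a setter that notifies them.
function createSignal(value) {
  const subs = new Set();
  const read = () => value;
  const write = (next) => {
    value = next;
    subs.forEach((fn) => fn(value));
  };
  read.subscribe = (fn) => subs.add(fn);
  return [read, write];
}

// A "component" is just a function that runs once, subscribing directly
// to the signals it reads. There is no instance and no re-render pass.
function Counter(count) {
  let text = `count: ${count()}`;
  count.subscribe((v) => (text = `count: ${v}`)); // fine-grained update
  return { get text() { return text; } };
}

const [count, setCount] = createSignal(0);
const view = Counter(count);
console.log(view.text); // "count: 0"
setCount(5);
console.log(view.text); // "count: 5"
```

Because the component function never re-executes, adding more components mostly adds one-time creation cost rather than ongoing update cost, which is the scaling property being tested here.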

Benchmarking

Instead of focusing on one framework at a time, I think it will be easier to look at this in terms of levels. Relative positioning speaks a lot more to the trends. Since our baseline moves with us, Vanilla JS picking up Web Components as we level up, every library gets slower as we add more components, but by how much differs.

We are going to make heavy use of the averaged geometric mean (the bottom row) to look holistically at how these libraries compare. It is important to look at the individual results for more information, but this gives us an easy way to determine relative positioning.

Level 1 - All in One

One component/app is all you get. While for most libraries this is the optimal version, that is not true of the VDOM, where components are really important for managing update performance.
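To see why, consider a crude model of diffing (my own sketch, not Inferno's actual algorithm): with a component per row, a VDOM can bail out of unchanged subtrees via reference equality on props, but in one big component every row's virtual nodes are recreated and re-diffed on each update.

```javascript
// Crude model: diff cost is counted as the number of row subtrees visited.
function diffRows(prevRows, nextRows, useComponentBoundary) {
  let visited = 0;
  for (let i = 0; i < nextRows.length; i++) {
    // With a component per row, identical props let us skip the subtree.
    if (useComponentBoundary && prevRows[i] === nextRows[i]) continue;
    visited++; // otherwise we re-diff this row's subtree
  }
  return visited;
}

const rows = Array.from({ length: 1000 }, (_, i) => ({ id: i, label: `row ${i}` }));
// Update a single row immutably, as a VDOM app typically would.
const next = rows.slice();
next[42] = { ...next[42], label: "updated" };

console.log(diffRows(rows, next, false)); // 1000: everything re-diffed
console.log(diffRows(rows, next, true));  // 1: only the changed row
```

That thousand-fold difference in work per update is exactly what hurts a single-component VDOM implementation on benchmarks like select row.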

M1
Level 1 - M1

Intel w/ Slowdowns
Level 1 - Intel

This is probably the worst you've ever seen Inferno perform, and it's not its fault. This is what would happen if everyone wrote VDOM code the way it is described in Rich Harris' The Virtual DOM is pure overhead. Hopefully most people don't do that. It actually isn't bad for most things, but it really takes a hit on the select rows benchmark and wherever updates are more partial.

Level 2 - Rows and Buttons

This is what I'd consider the typical scenario for a lot of frameworks in terms of component breakdown. The VDOM now has enough components to operate.

M1
Level 2 - M1

Intel w/ Slowdowns
Level 2 - Intel

Thanks to adding Web Components to Vanilla, the gap between it and Solid has disappeared. Inferno is significantly faster now that it has enough components. Lit, Svelte, and Vanilla are keeping pace with each other, so it looks like their components have comparable cost.

Level 3 - Components 'R Us

At this level every table cell is a component. This breakdown might seem a bit extreme to some, but in Virtual DOM land we are used to this sort of wrapping. Things like styled components and icon libraries push us toward these patterns without flinching. Just how expensive is this?

M1
Level 3 - M1

Intel w/ Slowdowns
Level 3 - Intel

Adding Web Components to our optimal Vanilla JS has actually made it more expensive than the equivalent Solid example. Inferno has now closed the gap with Vanilla JS considerably, and Svelte and Lit have continued to drop a few more points. On the slower system, Svelte is really getting hurt at this point by its memory usage on benchmarks like clear rows:

Intel w/ Slowdown
Level 3 Memory - Intel

Conclusions

I feel like a broken record, but really we shouldn't be comparing Web Components to JavaScript framework components. They serve a different purpose, and performance is not a place where they can win. There is nothing wrong with that once you understand they aren't the same thing.

If anything, this test was set up in Web Components' favor. There is no Shadow DOM and no extra elements inserted. Those things, which you would find in the real world, would make them an even heavier solution. I didn't want any contention, so I kept in things like explicit event delegation, which only benefits Lit in this test. This is really the most optimistic look at Web Components.
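For reference, "explicit event delegation" here means attaching one listener at the table root and resolving which row was acted on from the event target, instead of attaching a listener per row. A DOM-free sketch of the lookup logic (illustrative only, not Lit's or the benchmark's actual code):

```javascript
// One root handler resolves which node (if any) owns the event by walking
// the ancestor chain, innermost target first, like event.composedPath().
function delegate(handlerMap, eventPath) {
  for (const node of eventPath) {
    const handler = handlerMap.get(node.kind);
    if (handler) return handler(node);
  }
  return null; // click landed outside any delegated target
}

const handlers = new Map([
  ["remove-icon", (n) => `remove row ${n.rowId}`],
  ["row", (n) => `select row ${n.rowId}`],
]);

// A click that landed on the remove icon inside row 7:
console.log(delegate(handlers, [
  { kind: "remove-icon", rowId: 7 },
  { kind: "cell", rowId: 7 },
  { kind: "row", rowId: 7 },
  { kind: "table" },
])); // "remove row 7"
```

With 1,000 rows this replaces thousands of per-row listeners with a single one, which is why keeping it in meaningfully flatters any implementation that uses it.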

It might not always be this way, to be sure. Web Component performance has improved in the two years since I last tested. But it isn't as simple as saying "use the platform." As it turns out, all JavaScript frameworks use the platform, just some more efficiently than others. It's a delicate balance between using the platform for standards' sake and using it only as far as it is empirically beneficial. There are way more factors than performance here.

But it is pretty clear that frameworks that scale well with more components, whether Virtual DOM libraries like React or Inferno or "component-less" libraries like Solid, don't experience as much overhead.

This doesn't come as much of a revelation to me this time around. But maybe by looking at a few numbers we can better extrapolate where we should be cautious. This is just a brutal microbenchmark that only really shows us framework-level bottlenecks; the real ones usually happen in our user code. But for those looking to evaluate the pure technological approach, maybe there is some value here.


Results in a single table Intel w/ Slowdowns

Full table - Intel

Top comments (7)

Yisar

github.com/yisar/fre/blob/master/d...

You can add fre to this test, but you need to set sync to true when calling the render function, because fre uses an off-screen rendering optimization whose speed depends on the computer.

Ryan Carniato

I'd like to add fre to the official benchmark. Depending on how it compares to ivi and Inferno, it might be a suitable candidate to represent "team VDOM" in future articles.

As for the test I was doing in the article, I'm positive fre would do quite well. If people read between the lines, Inferno has the best scaling between V2-V3, and my findings have been that, except on memory, VDOM libraries scale the best with components. Solid takes a hit because more components break apart the elements it can clone together, which basically neutralizes one of Solid's performance benefits for large creations, where Inferno or most VDOMs are mostly untouched. That being said, given where Solid started it had more room to lose. Same thing with Vanilla JS. So the fact that it stays ahead is still a good sign for the approach.

But depending on how fre does compared to Inferno, we could have quite a good battle. If you look at my previous article, Solid and the VDOM library ivi were neck and neck. And it makes sense, as most of Solid's optimizations get neutralized by having a ton of components, but at least there isn't much in the way of additional overhead.

I'm really waiting to put Marko through this, since Marko Next's compiler recombines the templates, so it might be the first library to actually have zero overhead on components. It literally compiles them out.

Yisar • Edited

When we talk about performance, I want to find the bottleneck that causes the slow speed first.

In a mobile app, the real performance bottlenecks are:

  1. The initialization time of WebView
  2. The response time of HTTP.

These bottlenecks have nothing to do with JS, and SSR cannot solve them either.

So we hope to solve them at the native rendering engine level. We pre-initialize the WebView and pre-download resources in advance. This requires a multi-threaded rendering engine, which lets JS perform rendering concurrently with the UI.

Joe Pea

This is great. We need more of these tests that are more realistic: components used inside components used inside components.

The last table before "Conclusions" seems not to be sorted by geometric mean like all the others.

Ryan Carniato

Yeah, it is the memory table for the performance table right above it. I took the screenshot at the same time without re-ordering it. But you are the second person to point this out, so I guess that part isn't obvious.

Dumb Down Demistifying Dev

Would love to hear more about Stencil.js comparisons, as it follows the Decorator pattern, which is more opinionated than many other frontend frameworks other than Angular.
I'm curious whether there are plans for Solid to further improve with scalability and maintainability in mind.
Thank you for all the hard work for this wonderful framework! 🎉

Ryan Carniato • Edited

Well, the reason Stencil influenced this was that I couldn't get the score I wanted out of it. It's basically the poster child for the problem of conflating Web Components with framework components.

If you look at Inferno, you can see that the VDOM scales the best with components. Solid moves .06 from #1 to #3, but Inferno's gap from #2 (which is its best) to #3 is .02. Inferno, however, is much worse if you put everything in a single component as in #1. VDOMs work best with enough components to break apart the updates, and Stencil uses a VDOM.

So since it doesn't support Native Built-ins, it was impossible to have a component per row, something that would help any VDOM implementation perform better. In fact, if you look at it this way, a VDOM in Web Components is self-cancelling: you need more components to improve performance, yet Web Components are heavy and reduce performance, so the two sides are basically fighting each other.

It's very similar to building a reactive framework that runs in a VDOM. If you look at the official results, Vue has the exact same problem as Stencil, with some of the worst select rows scores on the whole table. Sometimes two great technologies are designed in ways that are fundamentally at odds with each other.

Basically, if your components don't scale, it is in your interest in that benchmark to sacrifice partial update performance rather than take a hit on creation by making more components. But to me that is really awkward, and a red flag from a performance scalability standpoint, since real applications have components.


When you say scalability, I gather you mean project size, not performance (which is what I'm showing here). I think it is a matter of building it up. Solid is built off primitives. The whole thing, even the internals, uses the same reactive primitives the developer uses. So it's a lot like Lego, and my approach to building it out follows that thinking.

As we near 1.0, Solid has everything it needs from a fundamentals standpoint, as anything else will just be composed on top of it using the same building blocks. This approach by its very nature lends itself to the "just a library" mentality, but I don't think that will fly given how crowded the wider ecosystem is.

So the next phase is going to focus on essential areas of development like a router, query caching, and an isomorphic app starter. It's going to take a while on the official path, but I intend to give these newer additions the same level of care I've given to the core.