The idea behind a virtual DOM is clear: build an in-memory representation of the DOM and apply only the differences to the "real" DOM. This makes p...
VDOM has never been faster to execute than tightly hand-crafted JavaScript, but writing such code is a nightmare. It has been much faster to build apps, component libraries, etc. with frameworks that use a VDOM, and they still have great performance.
VDOM lets you have components which are re-runnable factories. When something changes, you re-run the factory and it spits out VDOM nodes; these lightweight JavaScript objects are then compared with the ones spat out last time, and only the differences are applied.
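As a sketch of that factory-and-diff model (all names here are illustrative, not taken from any real library):

```javascript
// A vnode is just a plain object: cheap to create, cheap to compare.
function h(tag, props, ...children) {
  return { tag, props: props || {}, children };
}

// A "component" is a re-runnable factory: state in, vnodes out.
function Counter(state) {
  return h("div", { id: "counter" }, h("span", null, `Count: ${state.count}`));
}

// Naive diff: walk both trees and collect only the changes.
function diff(oldNode, newNode, path = []) {
  const patches = [];
  if (!oldNode || oldNode.tag !== newNode.tag) {
    patches.push({ type: "REPLACE", path, node: newNode });
    return patches;
  }
  for (const key of Object.keys(newNode.props)) {
    if (oldNode.props[key] !== newNode.props[key]) {
      patches.push({ type: "SET_PROP", path, key, value: newNode.props[key] });
    }
  }
  newNode.children.forEach((child, i) => {
    if (typeof child === "string") {
      if (oldNode.children[i] !== child) {
        patches.push({ type: "SET_TEXT", path: [...path, i], text: child });
      }
    } else {
      patches.push(...diff(oldNode.children[i], child, [...path, i]));
    }
  });
  return patches;
}
```

Re-running the factory with new state and diffing against the previous tree yields only the changes that need to be applied to the real DOM.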
Hand-crafted reactive JavaScript will always be faster than this, but at the cost of real complexity and code that is much harder to reason about.
Libraries like SolidJS work by providing a much nicer reactive interface that still lets you write simple JavaScript which knows exactly which DOM elements to update when things change. In SolidJS, components are never re-run after the initial mount; instead, reactive logic changes DOM attributes or nodes when the state changes. This makes the code slightly more complex and introduces restrictions you must adhere to (for instance, not using JS destructuring on the props passed to components) compared to VDOM implementations.

So you have libraries like SolidJS and Svelte (more of a modified language) which aim for highly reactive components, and libraries like Inferno, Preact, React, etc. that use a VDOM.
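The fine-grained model behind this can be sketched with a hypothetical miniature "signal" (this is not SolidJS's actual implementation; it only illustrates why values must be read through getters - destructuring would read the value once, outside any tracking scope, and lose reactivity):

```javascript
// Tracks which effect is currently running so reads can subscribe it.
let currentEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect); // track the reader
    return value;
  };
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn()); // re-run only dependent effects
  };
  return [read, write];
}

function createEffect(fn) {
  currentEffect = fn;
  fn(); // the first run registers the signal dependencies
  currentEffect = null;
}

// The effect re-runs only when the signal it read changes --
// no component re-render, no VDOM diff.
const [count, setCount] = createSignal(0);
let rendered = "";
createEffect(() => { rendered = `Count: ${count()}`; });
setCount(3); // rendered is now "Count: 3"
```

If you "destructured" the value (`const c = count()`) outside the effect, the update would never reach the view - which is the kind of restriction mentioned above.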
In widely touted benchmarks (I borrowed a slightly out-of-date one of Ryan Carniato's examples below - Ryan is the author of Solid), the two closest to native JavaScript are Solid and Inferno. Given how fast these are, it's hard to imagine needing to write native JS, because it would be so much harder to reason about. From this we can see that a VDOM can be crazy fast and reactive can be crazy fast - and you know what? React isn't in that top ranking, but it's still fast enough for most people.
So do we need VDOM? Well I think that principles like reactivity (with Hooks in React and signals in Solid for instance) really help us construct our applications. I won't be going back to hand crafted JS for sure...
In fact, there is a lot of code out there that is a nightmare, but does this mean we can't write good code in JavaScript? Do we need a virtual DOM because people are not willing to learn to write better code?
We have built some really nice applications recently that work directly on the DOM - just plain, event-based JavaScript. And it turned out they work like a charm.
Well, yep, I can see that happening. I think you should certainly write the code you need. If you need the best possible performance, or you can easily reason about your problem, then that could well be the best way to go. It's all about the mental models you need to apply and at which level you should apply them, right? Whatever gets the job done well enough, fast enough, sounds good to me.
Sure, but there are reasons why people developed frameworks like Angular or React years ago. It seems to me that things have changed over the last 5 years and many of those reasons are gone. But the frameworks are still there.
This is a very interesting idea to me. I’d like to hear more about it. Maybe you could write a new post about what changes over the past 5 years have made frameworks less vital. I would read it and click the heart.
I suppose there are good reasons on every side of this discussion. As there are different tasks to solve, different tools may be useful. Read more about the background here.
Best regards
VDOM is just an implementation detail. The real goal here is declarative programming.
The DOM is imperative and stateful, so we need a framework/library to transform our declarative code into the imperative commands that the DOM provides.
This translation can be done entirely at runtime with something like VDOM diffing, but most of the work can be done ahead of time (at build/compile time), and that's the current trend.
Relying on the DOM for declarative updates (essentially recreating the DOM tree whenever there's a change) isn't a good idea, since the DOM wasn't designed to work like this, and that approach has lots of problems.
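To make the "declarative in, imperative out" idea concrete, here is a toy sketch (command names, id scheme, and structure are invented for illustration, and the ids would collide for repeated siblings) that compiles a declarative view description into a flat list of imperative DOM-like commands:

```javascript
// Turn a declarative tree into imperative commands a renderer could replay.
function compile(node, parentId = "root") {
  const commands = [];
  const id = `${parentId}/${node.tag}`; // simplified id scheme
  commands.push({ op: "createElement", tag: node.tag, parent: parentId, id });
  for (const [key, value] of Object.entries(node.props || {})) {
    commands.push({ op: "setAttribute", id, key, value });
  }
  for (const child of node.children || []) {
    if (typeof child === "string") {
      commands.push({ op: "appendText", id, text: child });
    } else {
      commands.push(...compile(child, id)); // recurse into element children
    }
  }
  return commands;
}
```

A runtime library replays such commands when state changes; a compiler-oriented framework emits (specialized versions of) them at build time instead.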
The issue is that React specifically, and VDOM generally, impose a "run time cost" for the developer convenience of "declarative design".
Given that the browser is the native application and JavaScript is just (the highly optimized) automation language to coordinate user interaction, client-side development really needs a more lean and mean mindset (The Mobile Performance Inequality Gap, 2021).
Given that situation, developer conveniences should not impose runtime costs but should instead rely on design-time tools that compile to an effective machine representation - i.e. code that manipulates the DOM efficiently and effectively without relying on runtime tricks like a VDOM.
In a way SolidJS is a move in that direction but there is room for improvement tooling-wise - the more work can be done at design/compile time the better.
The other issue is that the claim that "React is declarative" is widely publicized but largely unrealized in code bases, and wouldn't hold up under close scrutiny. People see JSX tags and say "Ooohh! It's declarative", ignoring all the imperative JavaScript code surrounding and permeating it.
CSS is declarative and many developers hate it.
When you write a React component you're writing imperative code that generates declarative code (a description of a view a.k.a VDOM). But what's the value in this?
An app can be thought of as a state machine: you start at state 0, and from there you can typically go to any other state, and from that state to some other arbitrary state. For each of these states you have to create a UI view that represents it.
Doing this naively, you'd have to create code that updates the view for all possible state changes. If you have just 0 (initial) + 3 states, that gives 3 + 3 * 2 = 9 code paths (O(n^2)) that you have to create and maintain.

But you can be smart about it: when there's a change from, let's say, state 3 to state 1, instead of going there directly, you go state 3 -> state 0 -> state 1. In other words, when there's a state change, you destroy the view for state 3 and build the view for state 1 from state 0. Now you have all possible state changes covered with just 3 code paths (O(n)).

Unfortunately, destroying and rebuilding the DOM tree on every state change isn't a good idea, because the DOM is stateful and isn't optimized to work in this manner.
That's where React comes in. With React, you create a component that can generate a description of the view for any given state, and React takes care of efficiently updating the DOM from arbitrary state x to arbitrary state y by comparing the descriptions that your component generated for both states.
So, once again: it's not your component code that's declarative in React - it's the code your component generates. In other words, React is declarative in the sense that it takes the declarative description of the view (which your imperative component generates) and instructs the DOM to render or update the view so that it matches the description. The reconciliation process can be thought of as a translation layer that takes declarative code and returns the imperative instructions required to update the DOM to the described state.
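The "one code path per state" idea above can be sketched as a single function from state to view description (the state names and view shapes are made up for illustration):

```javascript
// One render path per state: any transition, e.g. "error" -> "ready",
// is just "describe the target state and let the diffing layer apply it",
// instead of a hand-written update for every (from, to) pair.
function view(state) {
  switch (state) {
    case "loading": return { tag: "p",  children: ["Loading..."] };
    case "error":   return { tag: "p",  children: ["Something went wrong"] };
    case "ready":   return { tag: "ul", children: ["item 1", "item 2"] };
    default:        return { tag: "div", children: [] };
  }
}
```

With n states this is n render paths, versus roughly n^2 hand-written transition paths in the naive imperative approach.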
The declarative approach is a must when you want to build a complex UI, because otherwise your code would quickly become unmanageable (for just 10 states you'd have 10^2 = 100 possible code paths), even though destroying the view and rebuilding it from scratch is obviously slower than a smooth transition from one state to another.

Technically, you could destroy and rebuild the DOM on every change (and force browser vendors to optimize for that), but as you mentioned, a better way is to optimize code at build time, and the React team is aware of that:
Engineers have proved that bumblebees can't fly! Luckily the bumblebee does not know this....
animal-dynamics.com/ad-blog/the-bu... 😛
The original statement:
… an excerpt from Wikipedia
Given that definition SQL seems like a good example of declarative programming. A statement expressed in Data Manipulation Language "declares" the nature of the data desired while the manner in which it is obtained is left entirely up to the RDBMS engine.
Now, based on that example, writing components containing imperative code to generate SQL statements (DML) is not declarative programming.
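To make that distinction concrete, a small sketch (table and column names are made up, and real code would use bound placeholders rather than string interpolation): the generating JavaScript is imperative, while the generated SQL statement is declarative.

```javascript
// Imperative accumulation...
function buildQuery(filters) {
  const clauses = [];
  for (const [column, value] of Object.entries(filters)) {
    clauses.push(`${column} = '${value}'`); // (use placeholders in real code)
  }
  let sql = "SELECT * FROM users";
  if (clauses.length > 0) sql += " WHERE " + clauses.join(" AND ");
  return sql; // ...of a declarative statement
}
```

Calling `buildQuery({ name: "Ada" })` yields `SELECT * FROM users WHERE name = 'Ada'` - the loop and the conditionals are imperative; only the resulting string "declares" what data is wanted.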
From the React landing page:
So "declarative approach to building UIs" is doublespeak for "I can't be bothered dealing closely with the actual DOM - ever."
I wouldn't hold my breath - Compiler Optimization Umbrella #9223.
Also Scheduling:
i.e. the design philosophy prefers run time control over compile time commitment.
A virtual DOM still has some benefits. It's easier to mutate than the actual DOM, and the abstraction allows for some sophisticated dev tools.
However, when it comes to performance, the overhead of a vDOM becomes problematic in terms of scalability. While interference between the reconciler and the layout engine is mostly taken care of by batching changes, if reconciliation takes longer than a render frame, things become jerky fast.
In any case, neither going sans vDOM nor using one is a silver bullet, because the bottlenecks are usually in the reactive flow - which is why there are state-management libraries like Immer and frameworks like Solid.js that allow fine-grained control over it.
Ten years back, the browser landscape was still very inhomogeneous. Creating apps that could run on every computer required tools like Babel, or forced you to stick to a very limited set of features that all browsers had in common. And JavaScript was still a bit slow.

About 5 years ago (maybe a bit earlier), the vendors managed to synchronize their development, so today we can be pretty sure that JavaScript will run on every current browser. There are still some minor differences, but the common subset is pretty broad.

Browser engines gained a lot of speed and many new possibilities with HTML5, so the browser serves pretty well as a programming platform for all kinds of tools.

JavaScript developed a lot, from a simple scripting language into a full-featured programming language. In parallel, it has become very fast, so today it can compete with compiled code.

I'm not sure that 5 years ago it was a good idea to create native JS apps. Maybe it was possible, but still not easy. Today, things are different: we can use JS to write native apps and be pretty sure they will run on more or less any machine.
I recently started the journey with my own framework, DML, and the results are pretty amazing without any virtual DOM or bulky toolchain. The project is still in its infancy, but it is developing very nicely. So I'm pretty sure that a VDom is pure overhead. But as we have learned, a VDom is necessary only for declarative code. As DML relies on the OO paradigm, it plays very smoothly with the real DOM.
During that time period and to this day people are chasing "emulating the native experience" in the browser.
But even back in 2015 PPK wrote Web vs. native: let’s concede defeat:
And more recently A clean start for the web (also Second-guessing the modern web, If not SPAs, What?) - the "application web" needs to stop chasing the native experience and devise leaner ways of successfully working with the most hostile development platform in the world.
As it happens, I have experience in both.
Frankly, no matter which it is, I can optimize it down to the mechanism, but the principles differ.

The main work of a vDOM is to create a data structure and then run an algorithm over that data structure.

We can't replace the DOM tree wholesale; we can only reuse it as much as possible.

That is the difference between the two, but it is undeniable that both can be very fast.
After some research and discussion, I'd like to add another finding that might be enlightening.
I have always wondered what a VDom is good for, because I was using an object-oriented approach. Objects have state, so it is easy to keep track of the state of a DOM element. The DOM itself is also stateful, so objects and the DOM work together seamlessly. As long as an object knows the reference to a DOM element, it does not even have to track the element's state; it can just read the element state directly.
Things are different with a functional approach. As functions are stateless (or should be, in FP), it is much harder to keep track of state changes. So you need a complex system that manages states and state transitions.
Bottom line: using an OOP approach makes it much easier to access the DOM, while FP and the DOM are not really a good team. Nevertheless, there might be good reasons to use functional programming - but then you will need a VDom to manage your state changes efficiently!
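A minimal sketch of that OO pattern, with a plain object standing in for a real DOM element so it runs outside the browser (the class and all names are illustrative, not DML's actual API):

```javascript
// The widget keeps a direct, permanent reference to "its" element
// and updates it in place -- no description, no diffing.
class CounterWidget {
  constructor(element) {
    this.element = element; // stand-in for a real DOM node
    this.count = 0;
    this.render();
  }
  increment() {
    this.count += 1;
    this.render(); // touch only what changed
  }
  render() {
    this.element.textContent = `Count: ${this.count}`;
  }
}

// In a browser this would be document.createElement("span").
const fakeElement = { textContent: "" };
const widget = new CounterWidget(fakeElement);
widget.increment(); // fakeElement.textContent is now "Count: 1"
```

Because the object owns both the state and the element reference, every state change maps directly to one targeted DOM mutation.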
There is a typo
cpable
Thx!
That is a really impressive presentation by Sindre Osen Aarsaether that gets right to the point. Really amazing!
I'm guessing VDOM is still faster, but not noticeably enough to matter (I once ran a benchmark and found that VDOM was ~0.00001612903 ms faster).
As a VDom adds some operations before an action is performed on the DOM, it cannot be faster - except where it saves some unnecessary operations. So any benchmark depends heavily on your task.
But we see today that modern browsers can eliminate unnecessary operations as well, so the question is: which performs better?