
Super Charging Fine-Grained Reactive Performance

What's a Reactive Library?

Reactivity is the future of JS frameworks! Reactivity allows you to write lazy variables that are efficiently cached and updated, making it easier to write clean and fast code.

I've been working on a new fine-grained reactivity library called Reactively, inspired by my work on the SolidJS team. Reactively is currently the fastest reactive library in its category. You can use Reactively on its own right now, and in the future the ideas from Reactively will help SolidJS become even faster.

Let's explore some of the different algorithmic approaches to fine-grained reactivity. I'll introduce you to my new reactivity library, and we'll benchmark three of the libraries to compare.

Introducing Reactively

Fine-grained reactivity libraries have been growing in popularity recently. Examples include new libraries like Preact Signals, µsignal, and now Reactively, as well as longer-standing libraries like Solid, S.js, and CellX. Using these libraries, programmers can make individual variables and functions reactive. Reactive functions run automatically, and re-run 'in reaction' to changes in their sources.

With a library like Reactively, you can easily add lazy variables, caching, and incremental recalculation to your TypeScript/JavaScript programs. Reactively is tiny (<1 kB) and has a simple API. Hopefully, Reactively makes it easy for you to explore the benefits of reactive programming.

Here's an example of using Reactively for a lazy variable:

import { reactive } from "@reactively/core";

const nthUser = reactive(10);

// fetch call is deferred until needed
// (the fetch target is illustrative; the original elides the function body)
const lazyData = reactive(() =>
  fetch(`https://example.com/users/${nthUser.value}`).then((r) => r.json())
);

if (needUsers) {
  useBuffer(await lazyData.value);
}

Reactive libraries work by maintaining a graph of dependencies between reactive elements. Modern libraries find these dependencies automatically, so there's little work for the programmer beyond simply labeling reactive elements. The library's job is to efficiently figure out which reactive functions to run in response to changes elsewhere in the graph. In this example, the dependency graph is quite simple: lazyData depends on nthUser.
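To make the automatic dependency discovery idea concrete, here is a minimal, eager sketch (illustrative only, not Reactively's actual implementation; all names are made up): while a reactive function runs, a global "active observer" records every reactive value it reads, and a later write to one of those values re-runs the observer.

```typescript
// Minimal sketch of automatic dependency tracking (illustrative; not
// Reactively's actual implementation).
type Computation = { fn: () => void; sources: Set<Track<any>> };
let activeObserver: Computation | null = null;

class Track<T> {
  private observers = new Set<Computation>();
  constructor(private val: T) {}
  get value(): T {
    if (activeObserver) {
      // Reading inside a reactive function registers a dependency edge:
      // no explicit subscription code is needed.
      this.observers.add(activeObserver);
      activeObserver.sources.add(this);
    }
    return this.val;
  }
  set value(v: T) {
    this.val = v;
    // Eagerly re-run everything that read this value (a lazy library
    // would mark nodes stale here instead of re-running immediately).
    for (const obs of [...this.observers]) runObserver(obs);
  }
}

function runObserver(c: Computation) {
  const prev = activeObserver;
  activeObserver = c;
  try { c.fn(); } finally { activeObserver = prev; }
}

function effect(fn: () => void): void {
  runObserver({ fn, sources: new Set() });
}

// Usage: the effect re-runs automatically when `count` changes.
const count = new Track(1);
const log: number[] = [];
effect(() => log.push(count.value));
count.value = 2;
// log is now [1, 2]
```

Real libraries layer caching, laziness, and glitch avoidance on top of this basic read-tracking trick.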

Reactivity libraries are at the heart of modern web component frameworks like Solid, Qwik, Vue, and Svelte. And in some cases you can add fine-grained reactive state management to other libraries like Lit and React. Reactively comes with a decorator for adding reactive properties to any class, as well as prototype integration with Lit. Preact Signals comes with a prototype integration with React. Expect more integrations as these reactivity cores mature.

Goals of a reactive library

The goal of a reactive library is to run reactive functions when their sources have changed.

Additionally, a reactive library should also be:

  • Efficient: Never overexecute reactive elements (if their sources haven't changed, don't rerun)
  • Glitch free: Never allow user code to see intermediate state where only some reactive elements have updated (by the time you run a reactive element, every source should be updated)

Lazy vs. Eager Evaluation

Reactive libraries can be divided into two categories: lazy and eager.

In an eager reactive library, reactive elements are evaluated as soon as one of their sources changes. (In practice, most eager libraries defer and batch evaluations for performance reasons).

In a lazy reactive library, reactive elements are only evaluated when they are needed. (In practice, most lazy libraries also have an eager phase for performance reasons).

We can compare how a lazy vs eager library will evaluate a graph like this:

Lazy: A lazy library will recognize that the user is asking to update D, and will first ask B and then C to update, then update D after the B and C updates have completed.

Eager: An eager library will see that A has changed, then tell B to update, then C to update, and then C will tell D to update.
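The lazy half of this comparison can be sketched in a few lines (hypothetical helper names, not any library's real API): a write only marks the node dirty, and recomputation happens on read.

```typescript
// Minimal sketch of lazy (pull-based) evaluation: writes only mark a node
// dirty; reads recompute on demand. Hypothetical helper, not a real API.
let computeCount = 0;

function lazyMemo<T>(fn: () => T) {
  let dirty = true;
  let cached!: T;
  return {
    invalidate() { dirty = true; },  // a source changed: just mark, don't run
    get value(): T {
      if (dirty) {                   // recompute only when someone reads us
        cached = fn();
        dirty = false;
      }
      return cached;
    },
  };
}

let a = 1;
const b = lazyMemo(() => { computeCount++; return a * 2; });

a = 5; b.invalidate();
a = 7; b.invalidate();   // two writes, but no recomputation yet...
const result = b.value;  // ...a single recomputation happens on read
// result === 14, computeCount === 1
```

Note how two consecutive source changes cost only one recomputation; an eager design would have recomputed twice unless it batched the updates.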

Reactive Algorithms

The charts below consider how a change being made to A updates elements that depend on A.

Let's consider two core challenges that each algorithm needs to address. The first challenge is what we call the diamond problem, which can be an issue for eager reactive algorithms. The challenge is to not accidentally evaluate A, B, D, C and then evaluate D a second time because C has updated. Evaluating D twice is inefficient and may cause a user-visible glitch.

The second challenge is the equality check problem, which can be an issue for lazy reactive algorithms. If some node B returns the same value as the last time we called it, then the node below it (C) doesn't need to update. (But a naive lazy algorithm might immediately try to update C instead of first checking whether B has actually changed.)

To make this more concrete, consider the following code: it is clear that C should only ever be run once because every time A changes, B will reevaluate and return the same value, so none of C's sources have changed.

const A = reactive(3);
const B = reactive(() => A.value * 0); // always 0
const C = reactive(() => B.value + 1);


MobX

In a blog post a few years ago, Michel Weststrate described the core algorithm behind MobX. MobX is an eager reactive library, so let's look at how the MobX algorithm solves the diamond problem.

After a change to node A, we need to update nodes B, C and D to reflect that change. It's important that we update D only once, and only after B and C have updated.

MobX uses a two pass algorithm, with both passes proceeding from A down through its observers. MobX stores a count of the number of parents that need to be updated with each reactive element.

In the diamond example below, the first pass proceeds as follows: after an update to A, MobX marks a count in B and C recording that they each have one parent that needs to update. MobX then continues down from B and from C, incrementing D's count once for each parent with a nonzero update count. So when the pass ends, D has a count of 2 parents above it that need to be updated.

In a second pass, we update every node whose count has reached zero, subtract one from the count of each of its children, and then tell any child whose count has reached zero to reevaluate, repeating down the graph.

Then, B and C are updated, and D notices that both of its parents have updated.

Now finally D is updated.

This solves the diamond problem by separating execution into two phases: one to increment the pending-update counts, and another to update nodes and decrement the counts as updates complete.
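The two-pass counting idea can be sketched in a few lines on the diamond graph A → (B, C) → D (illustrative code, not MobX's actual implementation):

```typescript
// Sketch of the two-pass counting idea on the diamond A -> (B, C) -> D.
// Illustrative only; not MobX's actual implementation.
interface GraphNode {
  name: string;
  observers: GraphNode[];
  waiting: number;   // pending parent updates
  updates: number;   // times this node recomputed
}

const makeNode = (name: string): GraphNode =>
  ({ name, observers: [], waiting: 0, updates: 0 });

// Pass 1: walk down from the changed node, incrementing each observer's count.
function markStale(node: GraphNode) {
  for (const obs of node.observers) {
    if (obs.waiting === 0) markStale(obs); // descend only on the first mark
    obs.waiting++;
  }
}

// Pass 2: walk down again; a node recomputes only when its count reaches zero,
// i.e. only after every parent above it has finished updating.
function markReady(node: GraphNode) {
  for (const obs of node.observers) {
    obs.waiting--;
    if (obs.waiting === 0) {
      obs.updates++;
      markReady(obs);
    }
  }
}

const A = makeNode("A"), B = makeNode("B"), C = makeNode("C"), D = makeNode("D");
A.observers = [B, C];
B.observers = [D];
C.observers = [D];

markStale(A);  // D.waiting ends at 2: both parents must update first
markReady(A);  // B and C update, then D updates exactly once
// D.updates === 1
```

The counter is what prevents the glitch: D declines to recompute while any parent is still pending.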

To solve the equality check problem, MobX also stores an additional field that tells each node whether any of its parents have changed value when they updated.

An implementation of this might look like the following (code by Fabio Spampinato):

set(value) {
  this.value = value;

  // First recursively increment the counts
  this.stale(1, true);
  // Then recursively update the values and decrement the counts
  this.stale(-1, true);
}

stale(change: 1 | -1, fresh: boolean): void {
  if (!this.waiting && change < 0) return;

  if (!this.waiting && change > 0) {
    this.signal.stale(1, false);
  }

  this.waiting += change;
  this.fresh ||= fresh;

  if (!this.waiting) {
    this.waiting = 0;

    if (this.fresh) {
      this.update(); // recompute our value (reconstructed; this line was elided in the extracted text)
    }

    this.signal.stale(-1, false);
  }
}

Preact Signals

Preact's solution is described on their blog. Preact started with the MobX algorithm, but they switched to a lazy algorithm.

Preact also has two phases, and the first phase "notifies" down from A (we will explain this in a minute), but the second phase recursively looks up the graph from D.

Preact checks whether the parents of any signal need to be updated before updating that signal. It does this by storing a version number on each node and on each edge of the reactive dependency graph.

On a graph like the following, where A has just changed but B and C have not yet seen that update, we could have something like this:

Then, when we fetch D and the reactive elements update, we might have a graph that looks like this:

Additionally, Preact stores on each node an extra flag recording whether any of its sources may have changed since the node last updated. This lets it avoid walking up the entire graph from D to compare versions when nothing has changed.

We can also see how Preact solves the equality check problem:

When A updates, B will rerun, but not change its version because it will still return 0, so C will not update.
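Here is a self-contained sketch of the version-number idea (illustrative names, not Preact's actual code), applied to the equality-check example from earlier: each computed records the source versions it last saw, and bumps its own version only when its recomputed value actually changes.

```typescript
// Sketch of version-number invalidation (illustrative names, not Preact's code).
type Source = { version: number; pull(): any };

function signal<T>(initial: T) {
  let val = initial;
  const s = {
    version: 0,
    pull: () => val,
    set(v: T) { val = v; s.version++; },  // every write bumps the version
  };
  return s;
}

function computed<T>(fn: () => T, sources: Source[]) {
  let cached!: T;
  let seen: number[] | null = null;       // source versions at our last run
  const c = {
    version: 0,
    runs: 0,
    pull(): T {
      sources.forEach((s) => s.pull());   // let lazy sources update first
      const now = sources.map((s) => s.version);
      if (seen === null || now.some((v, i) => v !== seen![i])) {
        const next = fn();
        c.runs++;
        if (next !== cached) c.version++; // no value change -> no version bump
        cached = next;
        seen = now;
      }
      return cached;
    },
  };
  return c;
}

// The equality-check example: B's version never moves, so C runs only once.
const A = signal(3);
const B = computed(() => A.pull() * 0, [A]);
const C = computed(() => B.pull() + 1, [B]);

C.pull();   // B runs, C runs
A.set(4);   // A's version bumps
C.pull();   // B re-runs (A moved) but still returns 0, so C skips
// B.runs === 2, C.runs === 1
```

(This sketch passes the source list explicitly; Preact discovers sources automatically, as described above.)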


Reactively

Like Preact, Reactively uses one down phase and one up phase. Instead of version numbers, Reactively uses only graph coloring.

When a node changes, we color it red (dirty) and all of its children green (check). This is the first down phase.

In the second phase (up), we ask for the value of F and follow a procedure internally called updateIfNecessary() before returning the value of F. If F is uncolored, its value does not need to be recomputed and we're done. If F's node is red, we know that it must be re-executed. If F's node is green, we walk up the graph to find the first red node that we depend on. If we don't find a red node, then nothing has changed and the visited nodes are set uncolored. If we find a red node, we update it and mark its direct children red.

In this example, we walk up from F to E and discover that C is red. So we update C, and mark E as red.

Then, we can update E, and mark its children red:

Now we know we must update F. F asks for the value of D and so we updateIfNecessary on D, and repeat a similar traversal this time with D and B.

And finally we get back to a fully evaluated state:

In code, the updateIfNecessary process looks like this:

/** update() if dirty, or a parent turns out to be dirty. */
updateIfNecessary() {
  // If we are potentially dirty, see if we have a parent who has actually changed value
  if (this.state === CacheCheck) {
    for (const source of this.sources) {
      source.updateIfNecessary(); // Will change this.state if source calls update()
      if (this.state === CacheDirty) {
        // Stop the loop here so we won't trigger updates on other parents unnecessarily
        // If our computation changes to no longer use some sources, we don't
        // want to update() a source we used last time, but now don't use.
        break;
      }
    }
  }

  // If we were already dirty or marked dirty by the step above, update.
  if (this.state === CacheDirty) {
    this.update();
  }

  // By now, we're clean
  this.state = CacheClean;
}

Ryan Carniato describes a related algorithm that powers Solid in his video announcing Solid 1.5.


Benchmarks

Current reactivity benchmarks (Solid, CellX, Maverick) focus on creation time and on update time for a static graph. Additionally, existing benchmarks aren't very configurable, and don't test for dynamic dependencies.

We've created a new and more flexible benchmark that allows library authors to create a graph with a given number of layers of nodes and connections between each node, with a certain fraction of the graph dynamically changing sources, and record both execution time and GC time.

In early experiments with the benchmarking tool, we've found that Reactively is the fastest (who would've guessed 😉).

The frameworks are all plenty fast for typical applications. The charts report the number of updated reactive elements per millisecond on an M1 laptop. Typical applications will do much more work than a framework benchmark, and at these speeds the frameworks are unlikely to bottleneck overall performance. Most important is that the framework not run any user code unnecessarily.

That said, there's learning here to improve performance of all the frameworks.

  • The Solid algorithm performs best on wider graphs. Solid is consistent and stable, but generates a little garbage which limits speed under these extreme benchmark conditions.
  • The Preact Signal implementation is fast and very memory efficient. Signal works especially well on deep dependency graphs, and not as well on wide dependency graphs. The benchmarking also caught a performance bug with dynamic graphs that will undoubtedly be fixed soon.

It should be noted that there is a second part of the implementation of each framework that makes a big difference in performance: memory management & data structures. Different data structures have different characteristics for insert and delete, as well as very different cache locality in JavaScript (which can drastically change iteration times). In a future blog post we'll look at the data structures and optimizations used in each framework (such as S.js's slots optimization, or Preact's hybrid linked list nodes).

(Benchmark charts: wide graphs, deep graph, square graph, dynamic graphs.)

Top comments (19)

Ryan Carniato • Edited

Awesome work.

I like that you looked at the difference between width and depth. A lot of reactive benchmarks focus on depth, but most reactive UIs are actually wider than they are deep from a reactive perspective. Like how far do you derive a value? 2 or 3 levels tops, whereas these benchmarks tend to look at depths in the 1000s. But having the same value read in 10s or maybe 100s of places in the UI is more common.

I look forward to how we can make a system like this work with concurrent rendering.

Mike Pearson

I'd like to see how Solid does on these

Rasmus Schultz

Benchmark against @maverick-js/observables please. It also claims to be the fastest.

Ryan Carniato

I don't think they are claiming that anymore. Looking at their benchmarks Preact Signals are faster than them in those depth based tests.

Also, Jin added Reactively to his benchmarks and it came out on top:

Definitely should test it, but Reactively seems to be holding up well.

Rasmus Schultz

It's probably not very important anyway - any of these libraries are more than fast enough. Unless you were building a highly real-time application (like a game) the difference likely wouldn't be measurable in real-world applications.

But since clearly it's a contest on metrics, claims like that should be backed up - for those people who are picking a library based on its performance. Most of us probably should pick based on merit rather than metrics. 😋

Damian Tarnawski

From what I remember, the current solid algorithm also does coloring, to figure out priorities but then it updates all un-updated memos anyway for predictability. Is this somewhat correct?

modderme123

That's correct. The current Solid algorithm pushes all updates into a queue and then gets each item in the queue using the same coloring algorithm as Reactively

Ryan Carniato

Solid today uses queues, but it doesn't run any unnecessary computations. Basically it iterates to check the state of nodes more than it needs to because it's 1. faster than sorting, 2. easier to interrupt for time slicing.

Devin Rhode

Let's translate it to rust and then create a DB out of it.

David Di Biase

A Rust implementation would be interesting!

Johnny Liaw

Hey, thanks so much for the article! I found it extremely informative.

Curious how you found out about the internal implementation of MobX? I'm currently diving into the codebase but I'm having trouble seeing in the code how it tackles the diamond problem... I'm most likely not looking in the right area.

modderme123

Thanks for the kind words! It looks like since MobX 2.5, MobX has switched to a model with lazy computeds, and it currently uses a similar algorithm to Reactively & Solid. Because computeds are pulled on demand instead of evaluated eagerly, to avoid evaluating the bottom of the diamond twice it just needs to mark a node as clean/up to date after its first evaluation. The original version avoided over-evaluating any element in the diamond by predicting how many times it would be reevaluated (how many parents would update), and then only updating once that many parents had already been updated.

Here's the source of where MobX used to mark dependencies as stale:

You can compare that with the modern implementation using STALE (red), POSSIBLY_STALE (green), and UP_TO_DATE (gray):

artalar

Hi! The bench approach is very interesting, thank you for sharing this!
But I cannot reproduce the example from the readme; your package just has no "reactively" variable in the sources. Looks like this is a bundling issue?
Also, an important question IMHO: does Reactively support atomicity guarantees for state consistency, or could it lead to unpredictable behaviour if an error occurs?

modderme123

Ah yep, looks like what might be missing from the source is

export const reactive = (x) => new Reactive(x);

As for atomicity:
One of the really neat benefits of lazy state is that you don't need batching or atomicity within a transaction for consistency. However, it is still necessary to be able to catch errors and rollback. Currently Reactively doesn't support rolling back, but it should be possible to add in a future version.

artalar

I think atomicity should be the default (as React.js fails the whole render when an error occurs). The problem with reactive programming is that error propagation is not obvious and could lead to really bad cases; here is a simple example

jzymx50 • Edited

When will the second part of the benchmark be posted? Looking forward to comparing the data structures each framework uses!

Dmitry A. Efimenko

How come RxJS is not included? The most obvious choice for reactivity IMHO

modderme123

RxJS has a slightly different API and semantics that don't match the rest of the frameworks in the benchmark. For example, it has no concept of a computed & signal.

That being said, I would be open to a PR that adds RxJS to the benchmark (these are the files that you would need to modify). It should be possible to do that pretty quickly using something like