ayou

Implement React v18 from Scratch Using WASM and Rust - [13] Implement Lane and Batch Update

Based on big-react, I am going to implement React v18 core features from scratch using WASM and Rust.

Code Repository: https://github.com/ParadeTo/big-react-wasm

The tag related to this article: v13

We know that starting from v17, React has been using the Lane model to replace the previous Expiration Time model. Why was this change made? Let's first take a look at how the former Expiration Time model worked.

Each time an update is triggered, an update data structure is created with an expirationTime field representing its priority. As the name implies, expirationTime is the time after which the update is considered stale. Intuitively, the smaller the value, the sooner it expires and the higher its priority should be. In practice, however, the value is not simply the current time plus a constant: it is computed by a series of algorithms whose net effect is that a larger value indicates a higher priority. Note that a single FiberNode in the Fiber Tree may hold multiple updates.

Every time React schedules an update, it selects an expirationTime from all the update.expirationTimes across the FiberNode nodes to be the renderExpirationTime for this render. If an update.expirationTime in a FiberNode is less than renderExpirationTime, that update will be skipped:

// https://github.com/facebook/react/blob/v16.10.0/packages/react-reconciler/src/ReactUpdateQueue.js#L516-L518
const updateExpirationTime = update.expirationTime;
if (updateExpirationTime < renderExpirationTime) {
  // This update does not have sufficient priority. Skip it.
  ...
}

So with the Expiration Time model, determining whether an update task is included in a certain batch of updates is a single priority comparison:

const isTaskIncludedInBatch = priorityOfTask >= priorityOfBatch

The consequence of this model is that lower-priority tasks cannot be processed on their own. For example, given priorities A > B > C, you cannot process B without also handling A, and you cannot handle C without also handling both B and A.
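To make this concrete, here is a minimal sketch of the Expiration Time inclusion check. The numeric priorities for A, B, and C are hypothetical; per the convention above, a larger value means a higher priority:

```rust
// Hypothetical numeric priorities: larger value = higher priority.
const A: u32 = 3;
const B: u32 = 2;
const C: u32 = 1;

// Expiration Time model: a task is in the batch iff its priority
// is at least the batch's cutoff priority.
fn is_task_included_in_batch(task_priority: u32, batch_priority: u32) -> bool {
    task_priority >= batch_priority
}

fn main() {
    // To include B in a batch, the cutoff must be <= B's priority,
    // which unavoidably drags in the higher-priority A as well.
    let cutoff = B;
    assert!(is_task_included_in_batch(A, cutoff)); // A is always included
    assert!(is_task_included_in_batch(B, cutoff));
    assert!(!is_task_included_in_batch(C, cutoff)); // only the tail can be cut off
}
```

The check can only ever cut off a contiguous low-priority tail, which is exactly the limitation described above.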

Before the introduction of Suspense, this was reasonable. But once IO tasks (i.e., Suspense) enter the picture, a high-priority IO task can prevent a low-priority CPU task from executing, when in reality we would prefer the low-priority CPU task to complete first.

Consider the following code:

const App = () => {
  const [count, setCount] = useState(0)

  useEffect(() => {
    const t = setInterval(() => {
      setCount((count) => count + 1)
    }, 1000)
    return () => clearInterval(t)
  }, [])

  return (
    <>
      <Suspense fallback={<div>loading...</div>}>
        <Comp />
      </Suspense>
      <div>count is {count}</div>
    </>
  )
}

Within this scenario:

  • Comp initiates an asynchronous request (assume it takes 2 seconds to return) and continuously shows loading... until the request returns.
  • The count is incremented by one every second.

The observed effect should be as follows:

<div>loading...</div>
<div>count is 0</div>
=>
<div>loading...</div>
<div>count is 1</div>
=>
<div>loading...</div>
<div>count is 2</div>
=>
<div>I am comp, request successfully</div>
<div>count is 3</div>

However, if we follow the rules of the Expiration Time model, since the high-priority IO update corresponding to Suspense will block the low-priority CPU update, the content observed would be as follows:

<div>loading...</div>
<div>count is 0</div>
=>
<div>loading...</div>
<div>count is 0</div>
=>
<div>loading...</div>
<div>count is 0</div>
=>
<div>I am comp, request successfully</div>
<div>count is 3</div>

Another flaw of the Expiration Time model is that it cannot represent multiple disjoint groups of priorities. Using a Set would be impractical in both memory and computation: these inclusion checks run extremely frequently, so they must be as fast as possible and use as little memory as possible.

As a compromise, what we would typically do is maintain a range of priorities:

const isTaskIncludedInBatch =
  taskPriority <= highestPriorityInRange &&
  taskPriority >= lowestPriorityInRange

If there are multiple discontinuous ranges, then the code becomes quite cumbersome to write.

However, with the Lane model, the calculation becomes very simple:

const isTaskIncludedInBatch = (task & batchOfTasks) !== 0
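With bitmasks, a batch can also contain discontinuous priorities without any extra bookkeeping. A minimal sketch with plain u8 lanes (the lane names other than SyncLane are illustrative, not part of this implementation):

```rust
// Each lane is a single bit; a batch is the bitwise OR (union) of lanes.
const SYNC_LANE: u8 = 0b0001;
const INPUT_LANE: u8 = 0b0010;   // illustrative
const DEFAULT_LANE: u8 = 0b0100; // illustrative
const IDLE_LANE: u8 = 0b1000;    // illustrative

fn is_task_included_in_batch(task: u8, batch_of_tasks: u8) -> bool {
    (task & batch_of_tasks) != 0
}

fn main() {
    // A batch may hold discontinuous priorities, e.g. sync + idle,
    // without pulling in the lanes in between.
    let batch = SYNC_LANE | IDLE_LANE;
    assert!(is_task_included_in_batch(SYNC_LANE, batch));
    assert!(is_task_included_in_batch(IDLE_LANE, batch));
    assert!(!is_task_included_in_batch(INPUT_LANE, batch));
    assert!(!is_task_included_in_batch(DEFAULT_LANE, batch));
}
```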

Having briefly introduced the Lane model, let's look at how it is implemented specifically.

The goal this time is to implement Batch Update, where, as in the following example, multiple updates will only trigger a single render:

function App() {
  const [num, updateNum] = useState(0)

  return (
    <ul
      onClick={(e) => {
        updateNum((num: number) => num + 1)
        updateNum((num: number) => num + 2)
        updateNum((num: number) => num + 3)
        updateNum((num: number) => num + 4)
      }}>
      num is {num}
    </ul>
  )
}

Firstly, we define the Lanes and the related handling functions; currently, there are only two types of Lanes:

use bitflags::bitflags;

bitflags! {
    #[derive(Debug, Clone, PartialEq, Eq)]
    pub struct Lane: u8 {
        const NoLane = 0b00000000;
        const SyncLane = 0b00000001;
    }
}

pub fn get_highest_priority(lanes: Lane) -> Lane {
    let lanes = lanes.bits();
    let highest_priority = lanes & (lanes.wrapping_neg());
    Lane::from_bits_truncate(highest_priority)
}

pub fn merge_lanes(lane_a: Lane, lane_b: Lane) -> Lane {
    lane_a | lane_b
}

pub fn request_update_lane() -> Lane {
    Lane::SyncLane
}
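The `lanes & lanes.wrapping_neg()` expression in `get_highest_priority` is the classic two's-complement trick for isolating the lowest set bit; since lower bits represent higher priorities, that bit is the highest-priority lane. A dependency-free sketch of the same trick on a plain u8:

```rust
// Lower bit position = higher priority, matching get_highest_priority above.
fn highest_priority(lanes: u8) -> u8 {
    // In two's complement, x & -x keeps only the lowest set bit of x.
    lanes & lanes.wrapping_neg()
}

fn main() {
    assert_eq!(highest_priority(0b0110), 0b0010); // lowest set bit wins
    assert_eq!(highest_priority(0b0001), 0b0001);
    assert_eq!(highest_priority(0b0000), 0b0000); // NoLane stays NoLane
}
```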

Whenever an update is triggered on a FiberNode, the update's Lane is bubbled up to the root FiberRootNode and merged into root.pendingLanes:

pub fn mark_root_updated(&mut self, lane: Lane) {
    self.pending_lanes = merge_lanes(self.pending_lanes.clone(), lane)
}

Subsequently, the root node selects the highest priority Lane and begins a round of the rendering process:

fn ensure_root_is_scheduled(root: Rc<RefCell<FiberRootNode>>) {
    let root_cloned = root.clone();
    let update_lane = get_highest_priority(root.borrow().pending_lanes.clone());
    if update_lane == Lane::NoLane {
        return;
    }
    schedule_sync_callback(Box::new(move || {
        perform_sync_work_on_root(root_cloned.clone(), update_lane.clone());
    }));
    unsafe {
        HOST_CONFIG.as_ref().unwrap()
            .schedule_microtask(Box::new(|| flush_sync_callbacks()));
    }
}

Instead of executing perform_sync_work_on_root directly, the task is wrapped in a closure and pushed onto a queue, and the whole queue is then flushed in the next micro-task (or macro-task):

// packages/react-reconciler/src/sync_task_queue.rs
static mut SYNC_QUEUE: Vec<Box<dyn FnMut()>> = vec![];
static mut IS_FLUSHING_SYNC_QUEUE: bool = false;

pub fn schedule_sync_callback(callback: Box<dyn FnMut()>) {
    unsafe { SYNC_QUEUE.push(callback) }
}

pub fn flush_sync_callbacks() {
    unsafe {
        if !IS_FLUSHING_SYNC_QUEUE && !SYNC_QUEUE.is_empty() {
            IS_FLUSHING_SYNC_QUEUE = true;
            for callback in SYNC_QUEUE.iter_mut() {
                callback();
            }
            SYNC_QUEUE = vec![];
            IS_FLUSHING_SYNC_QUEUE = false;
        }
    }
}

// packages/react-reconciler/src/work_loop.rs
fn ensure_root_is_scheduled(root: Rc<RefCell<FiberRootNode>>) {
  ...
  schedule_sync_callback(Box::new(move || {
      perform_sync_work_on_root(root_cloned.clone(), update_lane.clone());
  }));
  unsafe {
      HOST_CONFIG.as_ref().unwrap()
          .schedule_microtask(Box::new(|| flush_sync_callbacks()));
  }
}

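The batching behavior of this queue can be sketched without the `static mut` globals. The following is an illustrative, safe single-threaded version of the same idea, with the render bail-out (explained below via mark_root_finished) modeled as a boolean flag:

```rust
// A safe, single-threaded sketch of the sync callback queue above.
struct SyncTaskQueue {
    queue: Vec<Box<dyn FnMut()>>,
    is_flushing: bool,
}

impl SyncTaskQueue {
    fn new() -> Self {
        SyncTaskQueue { queue: vec![], is_flushing: false }
    }

    fn schedule_sync_callback(&mut self, callback: Box<dyn FnMut()>) {
        self.queue.push(callback);
    }

    fn flush_sync_callbacks(&mut self) {
        if !self.is_flushing && !self.queue.is_empty() {
            self.is_flushing = true;
            for callback in self.queue.iter_mut() {
                callback();
            }
            self.queue.clear();
            self.is_flushing = false;
        }
    }
}

fn main() {
    use std::cell::Cell;
    use std::rc::Rc;

    // Four updates each schedule a callback, mirroring four setState calls,
    // but only the first callback finds pending work and "renders".
    let pending = Rc::new(Cell::new(true)); // stands in for pending_lanes
    let renders = Rc::new(Cell::new(0));
    let mut q = SyncTaskQueue::new();
    for _ in 0..4 {
        let pending = pending.clone();
        let renders = renders.clone();
        q.schedule_sync_callback(Box::new(move || {
            if pending.get() {
                renders.set(renders.get() + 1); // perform_sync_work_on_root
                pending.set(false);             // the lane is marked finished
            }
        }));
    }
    q.flush_sync_callbacks(); // flushed once, in the next microtask
    assert_eq!(renders.get(), 1); // four updates, a single render
}
```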

In schedule_microtask, we pick the first available scheduling API in the order queueMicrotask -> Promise -> setTimeout.

When executing perform_sync_work_on_root, rendering is carried out according to the priority of update_lane. After the commit is complete, update_lane is removed from pending_lanes:

pub fn mark_root_finished(&mut self, lane: Lane) {
    self.pending_lanes &= !lane;
}

This way, when perform_sync_work_on_root is executed again, the highest priority it obtains is NoLane, so it returns early instead of rendering a second time.
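With plain u8 bitmasks standing in for `Lane`, the finish-and-bail sequence looks like this (illustrative sketch):

```rust
const NO_LANE: u8 = 0b0000;
const SYNC_LANE: u8 = 0b0001;

// mark_root_finished: remove the committed lane from pending_lanes.
fn mark_root_finished(pending_lanes: u8, lane: u8) -> u8 {
    pending_lanes & !lane
}

// Same lowest-set-bit selection as get_highest_priority.
fn highest_priority(lanes: u8) -> u8 {
    lanes & lanes.wrapping_neg()
}

fn main() {
    let mut pending = SYNC_LANE;
    // First pass: SyncLane is selected and rendered.
    assert_eq!(highest_priority(pending), SYNC_LANE);
    pending = mark_root_finished(pending, SYNC_LANE);
    // Later passes: NoLane is selected, so the work loop bails out early.
    assert_eq!(highest_priority(pending), NO_LANE);
}
```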

For more details on this update, see here.

Please kindly give me a star!
