In this article, we explore how to optimise React applications. We'll work with a real example to examine some of the most common issues developers face. Then, we'll go through the process of improving our application to tackle those issues effectively. We'll also learn how to make informed decisions and strike the right balance between the different possible options.
Note: This is a lengthy article. However, you may already be familiar with some of the topics we discuss. Therefore, I encourage you to selectively read the parts that are relevant to you.
Table of contents
- Table of contents
- Introduction
- Performance challenges with React
- Understanding with a real example
- No optimisation
- Add memoization
- Reduce renders
- Reduce computation
- Optimise Updates
- Optimise Memory
- Summing up
Introduction
React has become increasingly popular in recent years for building user interfaces thanks to its declarative programming model. Previously, developers had to write code to update each part of the UI whenever something changed. This was difficult to manage at scale and often resulted in bugs as the application evolved. React changed that by defining the UI as a function of state. Now, developers only need to focus on updating the state correctly, and React takes care of updating the UI.
React is a good tool for building correct UIs quickly. However, the next problem down the line is performance.
Performance challenges with React
React is not without its limitations. Over the years, it has become increasingly clear that it is difficult to create performant React applications. Let us go over how React updates UI to understand where the problem lies.
Scheduling an update
React maintains a tree of your components. These components can have state, and developers can request a state update using `setState`. This instructs React to schedule an update for the component's state. React then performs a render and a commit to process the update for the subtree under that component.
Render phase
The component is re-evaluated using the new updated state. For function components, this means invoking the function again. React recursively re-evaluates all the children of the component being updated. In the end, React has the output of all components in the tree. React requires this to be done using pure functions which means if rendering is done multiple times with the same state, output should be the same. The output should only depend on the input and no other external factor. Components can describe any side effects that need to run separately using useEffect.
Commit phase
The output of the render phase is an updated representation of the DOM under the component subtree (often called the Virtual DOM). React now has both the outdated DOM for the subtree and the updated version, and uses an O(n) algorithm to diff them and patch the outdated DOM.
Where is the issue?
The commit phase is very straightforward. Under the imperative paradigm, developers would manually update the DOM which resulted in inefficient updates. React replaced that by carefully only updating what is needed in one go.
The render phase is usually more computationally expensive since the entire subtree is calculated again. That in itself is not a big problem for a single update. However, when you have multiple updates occurring rapidly, it becomes a real problem. The React team has acknowledged that and has started working towards mitigating it.
Imagine you have a parent component with 3 state variables - `s1`, `s2` and `s3`. It also has 3 children, each of which relies on one of these variables.
Even if only `s1` is changed, the parent and all three children are re-rendered instead of just the parent and Child 1. This is fine for correctness, since rendering is pure and re-rendering the other children returns the same result. However, when many updates are made to `s1`, `s2` and `s3` in rapid succession, the performance overhead becomes evident. This is often the case with highly interactive apps (like chat, video conferencing, etc.).
React has introduced measures like batching state updates, background concurrent rendering and memoization to tackle this. In my opinion, the best way to solve the problem is by improving the reactivity model: the framework should track exactly which code needs to re-run when a given state variable updates, and update only the corresponding part of the UI. Tools like solid.js and svelte work in this manner. This approach also eliminates the need for a virtual DOM and diffing.
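A minimal signal sketch in the spirit of solid.js shows the idea (the API names here are illustrative, not solid's actual surface): each child subscribes only to the signal it reads, so writing `s1` leaves the subscribers of `s2` and `s3` untouched.

```typescript
type Listener<T> = (value: T) => void;

// A minimal signal: reads return the current value, writes notify
// only the subscribers of this particular signal.
function createSignal<T>(initial: T) {
  let value = initial;
  const listeners = new Set<Listener<T>>();
  return {
    read: () => value,
    write(next: T) {
      value = next;
      listeners.forEach((fn) => fn(value));
    },
    subscribe(fn: Listener<T>) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}
```

With this model there is nothing to diff: an update runs exactly the code that depends on it.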
How do we optimise?
Software optimisation is usually not a simple process. Each change has trade-offs associated with it. Some trade-offs might be acceptable for your use case while others might not. This makes optimisation a very context-specific process. In general, it consists of repeated iterations of the following steps:
- Observe the application under stress.
- Find which parts of the app run into performance issues.
- Debug the root cause of these performance issues.
- Come up with approaches to mitigate those.
- Compare trade-offs between these approaches and select one most suitable to your use case.
Understanding with a real example
Let us consider a stock monitoring application. This mock app monitors rapidly updating stock prices in real time. It is deployed here. For the best experience, you should clone the source from here and run the app locally.
This app has the following sections:
- Header with navigation to the different versions of stock monitoring
- Buttons to observe/unobserve the stocks and reset the stock data
- A list of stock price events - contains the name of the stock and the current price
- A profiler widget which:
  - Shows how many renders take place every second and the average render & commit durations of those updates.
  - Can plot a line chart of the number of renders, average render duration and total time spent calculating per second throughout profiling. Total time = (Render + Commit) × Render Count
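For clarity, the widget's total-time metric boils down to this small calculation (the function name is mine):

```typescript
// Per-second cost of rendering, as reported by the profiler widget:
// total time = (average render + average commit) * render count.
function totalTimeMs(avgRenderMs: number, avgCommitMs: number, renderCount: number): number {
  return (avgRenderMs + avgCommitMs) * renderCount;
}
```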
To keep things consistent, I will profile each version of the app for 1 minute. Stock events are fired every 200ms. We will observe how the app behaves and iterate over the code to make it perform better.
Monitoring the application
The app already has a profiler widget built on the Profiler API. The development build of the app is deployed so that monitoring can be done with ease.
In addition, you can use React Developer Tools to inspect component trees and profile them. This provides a breakdown of why each re-render took place and how much time each component took to re-render.
Chrome also has a built-in performance monitor which helps you see the overall behaviour of the application outside React's context.
No optimisation
Let us observe the app without any optimisation. To see the performance fluctuations better, you can add an artificial CPU slowdown. I will be working with a 6x slowdown.
Go ahead and click on Observe. Remember to start profiling before you start observing. When you want to stop, click on Unobserve.
Profiling
Let us look at the plots generated by the profiling widget and understand the trends:
T = 0 - 18s
- The number of renders starts at 10-11 per second and stays there for a while.
- Render duration increases linearly from 0 to 50ms.
- Total time spent increases linearly from 0 to 500ms.

These trends show that, with time, the load on the JS thread increases. The render duration grows, which also increases the total time spent. Also, 500ms = 10 × 50ms, which means the JS thread can still process all the renders so far.
T = 18s - 33s
- The number of renders falls rapidly from 10 to 5.
- Render duration keeps increasing almost linearly, up to 110ms.
- Total time spent oscillates between 500ms and 650ms.

If you look closely, these trends are caused by the render duration increasing over time. This blocks the JS thread for longer and leaves less time for other renders each second. Total time spent shows the correlation between the decreasing render count and the increasing render duration: when the JS thread throttles and processes fewer renders, the total time spent drops; but since the average render duration keeps growing, the total time spent rendering rises again.
T = 33s - 60s
- The number of renders continues to fall, from 5 to an average of 3-4 towards the end. The decrease is not as sharp as before.
- Render duration still keeps increasing, although the growth has slowed down. This is why the render count did not fall as sharply as before.
- Total time keeps oscillating. The average is still 500-600ms, with large fluctuations on either side.

This is because the JS thread is close to its throughput limit, so fewer renders are processed and the growth in calculations slows down. The app becomes laggy and less responsive at this point.
Here is a screenshot of the performance analysis using Chrome Inspector.
If you look at the top, there are a lot of lagging frames. These updates are displayed, but they lag behind the main thread. They are coloured yellow and labelled as partially presented frames. The JS thread is busy for 92% of the observation time.
Now let us take a look at the React profiler under Chrome Inspector.
This further tells us two things:
- The `Table` component is the most expensive to render.
- During the renders, hooks 1 and 2 of the `Unoptimised` component change alternately.
Adding optimisation
We are familiar with how the app behaves. The table component is the bottleneck, so we can start by reducing the time spent rendering it. Let us look at the unoptimised version of the app (`src/routes/unoptimised/unoptimised.tsx`).
These are the hooks that React Profiler reported:
- `stockEventList` is used by the table.
- `averagePrices` is used by the line chart plot.

These are updated alternately due to the useEffect on line 35. Since they maintain separate parts of the UI, we can memoize the children so that each state update only re-renders the component that consumes it. Updating `averagePrices` should not re-render the table, and updating `stockEventList` should not re-render the line chart.
Trade-off
This change will reduce the render durations. Memoization will increase memory usage, though the React docs suggest this is generally not harmful.
It will also introduce refactoring. As a developer, you try to make minimal changes to the codebase so that existing logic does not break. With React, however, you'd often be required to prune and refactor big components into smaller ones, because your functions map directly to parts of the UI. Solid.js takes an interesting approach here, treating components simply as a means of modularity and reusability; breaking code into components does not affect how much code runs on updates.
Add memoization
The final code can be seen under (`src/routes/add-memo/add-memo.tsx`). Here is a list of the changes made:

- The table and line chart components are wrapped in `useMemo` to ensure they are not re-rendered when unrelated changes cause a re-render of the application.
- Inline constants are moved outside the function component. This ensures we are not creating new-but-equivalent objects on every render, as these might cause re-renders of the children.
React is planning on moving to a new compiler which can do some of this for you. But right now, this responsibility falls on the developer.
Profiling
Let us look at the profiling plots again:
There are some interesting trends this time:
- The average render duration is reduced, halving from a maximum of 180ms to 90ms. This is because the table is no longer re-rendered on every alternate render.
- This is reflected in the JS thread being able to process renders without any issues up to 39s, a huge improvement over the previous 18s.
- The render count is sustained at 10 up to 39s and then starts to fall.
- Total time spent increases up to 39s and then fluctuates around the previous range.
Based on these observations and the insights from the unoptimised version, we understand that the JS thread starts struggling when the total time spent reaches 500-600ms. In the long run, this version of the app will run into the same issues as before. It will take longer, but the app will ultimately become less responsive.
Now let us look at the React Profiler.
The change in trend is because every alternate render is smaller. However, the render duration for the table keeps on increasing.
Adding optimisation
If stock events are sent every 200ms, there should be 5 renders every second. However, we see that the renders (when the app is capable of handling the load) are twice that number. Let us focus on reducing the number of renders. If you look at the code (`src/routes/add-memo/add-memo.tsx`), you'll see that there is a `useEffect` on line 51.
This `useEffect` runs when the list of stock events changes. It calculates the average of the last 50 entries and also scrolls the table to the bottom. However, `useEffect` is meant for synchronising with external systems. The timing matters here: an effect runs after rendering, in a deferred event. If it only runs React-related code, there is usually a better place for that code. This is one of the many subtle rules that developers must remember, and it is not strictly enforced by React's design.
In the previous examples, `stockEventList` is updated, causing a re-render. This triggers the `useEffect`, which updates `averagePrices` and causes another re-render. We can move the code from the `useEffect` into the observer for stock events. Since multiple state updates are then made together, they will be batched into a single render.
Trade-off
This change will again introduce refactoring. Many problems with React code happen because components aren't refactored regularly; they grow too big and confusing, making them hard to manage. Components can be dynamic with `useState` and `useEffect` hooks, and it becomes easy to lose track of which code inside the component runs when. Some code can run again and again even when not needed, due to multiple updates to the component's state. If code runs because of an unrelated state update, it causes a functionality bug, which defeats React's purpose of creating correct applications. Developers need to trace the code and describe dependencies for changes. React provides helpful rules which can be enforced by IDEs and linting.
Therefore, you should always try to refactor React code while updating it to ensure that only the correct code runs for a minimal number of times. Currently, this responsibility lies with the developer and introduces extra complexity while writing code.
Reduce renders
To batch the state updates together, we move the code from the `useEffect` into the observer logic. The result can be found here (`src/routes/reduce-renders/reduce-renders.tsx`).
Another change you might notice is the `useStableCallback` hook. It creates a function that is stable in reference but always runs the latest code when invoked. It is inspired by class components. React is planning to bring something similar with useEffectEvent.
Profiling
Both state variables are now updated together in a single render. This is because React has now batched these updates together.
Let us look at the profiling trends again:
The app behaves similarly to the previous version. Here are the trends:
- The number of renders is lower (5 when the app can process all the updates), which means there are no extra renders.
- The average render duration is back to a maximum of 180ms. This is because the number of renders is halved but each render is almost twice as expensive. In the previous version, every alternate render was very small; this time, every update re-renders the table. Therefore, the calculations on every render are the same as in the unoptimised version.
- Total time spent reaches the 500ms-600ms range at around the same time mark as before (39s).
Adding optimisation
We have reduced the number of renders. The app is better than before, but it still becomes slower and unresponsive over time. The root cause is the increasing duration of renders. If you look at the React profiler, you'll realise this is because of the table component.
The table renders all the rows, and on every update all of them are re-rendered. As the number of entries grows, the time spent re-rendering the rows grows with it. The average price plot does not show this trend since we only show a running average of the last 50 entries, and it internally uses Canvas, which is very performant under rapid updates.
If we limit the number of rows rendered to only show the visible items, we can limit the cost of re-rendering the table. Even if there are a large number of rows in the table, only the ones on the screen will be rendered. That can be done by virtualization. I will use react-virtuoso for the app.
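The core of virtualization is a window calculation like the following sketch (uniform row height assumed; react-virtuoso handles variable heights and overscan for you):

```typescript
// Given uniform row heights, compute which rows intersect the viewport
// so only those need to be rendered.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight));
  const end = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / rowHeight));
  return { start, end }; // render rows [start, end)
}
```

Everything outside the range is replaced by empty spacer elements of the right height, so the scrollbar still behaves as if all rows existed.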
Trade-off
This change will make sure that the app doesn't slow down and become unresponsive. However, with virtualization we sacrifice the smoothness of the scroll: scrolling rapidly will result in blank flashes inside the table component. This will also happen when the list scrolls to the bottom on new stock events. We also don't render the rows that are out of view, which hides them from the browser's find-in-page functionality.
Reduce computation
The next version of the app can be found under `src/routes/reduce-computation/components/reduce-computation.tsx`.
Here are the changes made:
- The code for the chart and the table is broken into smaller components. They can be separately memoized using `memo`.
- The logic for averages has moved into the `Chart` component. This prevents the table from being re-rendered when there are updates from the `Chart` component.
- We have also eliminated the `averagePrices` state and simplified the average calculation. Earlier, we looped over the last 50 entries of the list; now we derive new entries from the previous averages.
- Inside the table component, we have replaced `NextTable` with `TableVirtuoso`.
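The third change - deriving new averages from previous ones - can be sketched as follows (names are mine; the app's actual implementation may differ in detail):

```typescript
const WINDOW = 50; // the article's window of recent entries

// Update a windowed average incrementally: O(1) per event instead of
// re-summing the last 50 entries on every update.
function nextAverage(
  prevAverage: number,
  count: number,          // entries seen so far
  incoming: number,       // new price entering the window
  leaving: number | null  // price falling out of the window, if it is full
): number {
  if (count < WINDOW) {
    // Window not yet full: grow the average with the new entry.
    return (prevAverage * count + incoming) / (count + 1);
  }
  // Window full: one entry leaves as one enters.
  return prevAverage + (incoming - (leaving ?? 0)) / WINDOW;
}
```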
Profiling
Here are the trends:
- The average render duration is much lower, between 0.8ms-1.5ms, and it does not increase. There are fluctuations that look significant, but these are just small variations magnified by the small scale of the render durations.
- Total time spent is also limited to 18ms-33ms. It does not increase with time.
- The number of renders is constant but much higher, at 18-20.
To understand why the number of renders is higher, we can check the React profiler.
Hook 2 causes extra re-renders of the `Virtuoso Table` component. This hook is responsible for tracking which rows are in view. The component works by showing the visible rows on the screen and adding space above and below them to account for the full size of the list. We scroll to the end on every update, which causes re-renders inside the `Virtuoso Table` component.
Here is what the performance analysis looks like:
The JS thread is occupied for 13.8% of the observation time, which is much lower than the previous 92%. There are no dropped frames and the app stays responsive in the longer run.
Adding optimisation
The app still feels unusable. The updates are too rapid for any meaningful interaction. It would be better if we could process multiple stock events together at larger intervals; this would also reduce the load on the JS thread. Scrolling to the bottom also seems to be adding more problems than value to the app. It is very hard to focus on a list that keeps scrolling, it causes extra re-renders in the virtual table, and it causes blank flashes while the list scrolls. It can be eliminated if the table shows the list in reverse order, with new entries added to the top. This is something you should discuss with your UX team.
Trade-off
Slowing down the rate of updating the UI by handling multiple updates together will provide better UX and performance. However, applying multiple events together will also introduce larger visual shifts, and it makes state management more complex. There is also the overhead of reversing the list while rendering the table, but with slower updates this should not be significant.
Optimise Updates
At this point, it is a good step to introduce a state management library. React provides context and reducers for complex state management. Using them, you can expose your state everywhere in the application and update it as needed. But they have certain limitations:
- Context can expose multiple values combined in an object. Even if only one of those values changes, the context will cause a re-render everywhere it is consumed.
- You have no control over how the updates are triggered. If a re-render in the context provider changes the context value, then all consumers will re-render.
- The developer needs to do most of the heavy lifting around state management logic.
Introducing a state management library can help make this process smoother. Here's how:
- You can precisely specify the data from the entire state store in your consumer component. The consumer will only re-render when that specific value is changed. So you have fine-grained control over the scope of re-renders on updates.
- You can use transient updates. In this case, you can subscribe to a value from the store in a component but you can choose when updates should cause a re-render. This is very useful in applications where there are a lot of rapid updates to the state. It is possible because external stores usually operate outside React's scope and you can control how the integration works with your components.
- State management libraries provide a systematic way of maintaining your state-related logic. This is useful when you're working in a large team.
- Most of the libraries come with dedicated developer tools. They also come with a plethora of community-maintained plugins which can make complex state manipulation very easy.
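Points 1 and 2 rest on the same mechanism: an external store that notifies subscribers per selected slice. Here is a minimal sketch in the spirit of zustand's selector-based subscriptions (not its real implementation):

```typescript
type Selector<S, T> = (state: S) => T;

// A tiny external store: consumers subscribe with a selector and are
// only notified when their selected slice actually changes.
function createStore<S extends object>(initial: S) {
  let state = initial;
  const subs = new Set<(s: S) => void>();
  return {
    getState: () => state,
    setState(partial: Partial<S>) {
      state = Object.assign({}, state, partial);
      subs.forEach((fn) => fn(state));
    },
    subscribe<T>(selector: Selector<S, T>, onChange: (value: T) => void) {
      let prev = selector(state);
      const listener = (s: S) => {
        const next = selector(s);
        if (!Object.is(prev, next)) {
          prev = next;
          onChange(next); // only fires when the selected slice changed
        }
      };
      subs.add(listener);
      return () => subs.delete(listener);
    },
  };
}
```

Because the store lives outside React, the `onChange` callback is also the natural place to apply throttling before triggering a re-render (the transient-update pattern).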
For our case, points 1 and 2 can help us handle updates in our rapidly updating application in a better way. I have used zustand for the application.
Understanding how the store is structured
The code for the store can be found at `src/routes/optimise-updates/store/stock-store.ts`. Here is how it works:
- The store maintains a list of stock event IDs. This list is read by the table component.
- A record of stock events indexed by their ID. Individual table rows read entries from this record. This is also useful where entries can be updated: only the relevant row component will re-render. In the previous version, the table component accepted a list of event objects as a prop, so changing a single entry would have caused a re-render of the whole table component.
We also have a custom hook which leverages transient updates. Instead of immediately re-rendering the component on state change, we throttle the updates. The code can be found under `src/routes/optimise-updates/hooks/use-throttled-store.ts`.
Using this hook makes sure that updates are throttled to the specified interval. The default value is 600ms, which means 3 stock-event updates will trigger 1 re-render.
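The throttling itself reduces to a small gate like this sketch (the real hook also has to re-read the latest store state when it finally re-renders; the injectable clock here is only for deterministic testing):

```typescript
// Allow at most one downstream update per interval.
function createThrottle(intervalMs: number, now: () => number = Date.now) {
  let lastEmit = -Infinity;
  return function shouldEmit(): boolean {
    const t = now();
    if (t - lastEmit >= intervalMs) {
      lastEmit = t;
      return true; // enough time has passed: let this update through
    }
    return false; // swallow the update; state is still recorded in the store
  };
}
```

A production version would typically also schedule a trailing flush so the final state is always rendered even if it arrives mid-interval.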
Now we can use this in the chart and table components.
Note that the list of IDs is reversed before rendering. This eliminates scrolling to the bottom on every event. If needed, a reversed list can also be maintained in the store.
We read the stock event data in the `StockTableRow` component. That doesn't need throttling, because it gets the ID from the table component, which already receives throttled updates.
The final code can be found at `src/routes/optimise-updates/components/optimise-updates.tsx`.
Profiling
Initially, there is a spike in render count and total time spent. That is due to the virtual table calculating the layout as entries start to overflow. However, it is not seen once the scroll has appeared in the table. To avoid the spike, we can rely on the fact that all the rows are of the same height, and ask virtuoso to skip this calculation by providing the size in props.
After the initial spike:
- Render count stays between 1-3.
- Render duration stays between 0.1-0.7ms.
- Total time spent stays between 0.5-2ms.
This is a huge improvement over the previous version, where the total time spent peaked at 33ms. The performance monitor further shows that the UI updates less rapidly now. The JS thread only spends 5.38% of its time on rendering effort.
Adding optimisation
Now the application seems to be stable. However, if you think about the store, it will grow indefinitely in the long run. That will cause memory usage to grow. We can remove old entries from the store to free up memory. In a real-world use case, you'd refetch those entries when needed again. This will make sure that memory does not increase unbounded.
Trade-off
This will add extra complexity to state updates. Processing updates where older entries are deleted might take longer.
Optimise Memory
The updated store can be found here: `src/routes/optimise-memory/store/stock-store.ts`. After the size of the ID list reaches 200, we remove the first 100 entries from the store, so a maximum of 200 entries is maintained.
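The trimming logic amounts to something like this sketch (the constants mirror the 200/100 values above; the names are mine):

```typescript
const MAX_ENTRIES = 200;
const TRIM_COUNT = 100;

type StockState = { ids: string[]; byId: Record<string, { price: number }> };

// Once the ID list reaches 200, drop the oldest 100 IDs and their records.
function trim(state: StockState): StockState {
  if (state.ids.length < MAX_ENTRIES) return state;
  const removed = state.ids.slice(0, TRIM_COUNT);
  const ids = state.ids.slice(TRIM_COUNT);
  const byId = { ...state.byId };
  for (const id of removed) delete byId[id];
  return { ids, byId };
}
```

Note that the copy-and-delete over the record is O(n) work done in one burst, which is exactly what shows up as dropped frames in the profile below.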
Profiling
The updated code can be found under `src/routes/optimise-memory/components/optimise-memory.tsx`. Here is the performance analysis before the memory optimisation:
Here is the performance profile after optimisation:
The memory usage doesn't go down by much. Earlier it was in the range of 11.0-19.4 MB; now it is in the range of 11.2-19.3 MB. This is probably because the event objects are small and don't take up much memory. It is also concerning to see a stretch of dropped frames in between. These are due to the store being cleared, which blocks the JS thread.
What went wrong?
This is an example of over-optimisation. We could save some memory usage but the cost of computation was higher. The version without memory usage optimisation is good enough since there isn't a practical use case where someone would monitor the stocks for so long that their memory is filled up.
But if we want to ensure that memory does not grow unbounded, we can experiment with the point at which the store is trimmed and the number of entries removed at once. Tweaking these parameters should get us into a zone of acceptable performance.
Summing up
Optimisation is a tricky process. You need to observe your application, correctly debug the bottlenecks and mitigate those with good design. There are added complications due to React's declarative model since you cannot completely control which code runs when. Depending on your use case, there might be other optimisations possible. Some of the common ones are:
- Bundle Splitting and Lazy Loading: Splitting your built application into smaller chunks that can be lazily loaded. These chunks are only sent to the client application if they are needed. It helps reduce the amount of code loaded on the client for your application to work.
- Server Side Rendering: This is a hybrid of older server-rendered applications and newer single-page applications. You can selectively render your components on the server and serve them to the client ready-made. This helps increase security, abstracts logic away from the client, and provides better load times and search engine visibility.
- Web Workers and WebAssembly: For applications that are computation-heavy, you can offload the work to a separate thread using web workers. You can also run compiled C++ (or other supported languages) code via WebAssembly, which gives you the benefit of near-native performance in the browser.
Did I miss something? Can I make this article better? Is there something that you liked reading? Let me know in the comments!
Credits
Cover photo by RealToughCandy[dot]com: https://www.pexels.com/photo/hand-holding-react-sticker-11035471/