This was an internal analysis I wrote when challenged on whether Kroger.com could pull off the same MPA speed tricks as my Marko demo, but still using React because, well, you know.1
It’s from around 2021, so some questions got answered, some predictions got proven wrong or boring-in-retrospect, and React’s official SSR story actually released. I once planned a rewrite, but I’m burnt out and can’t be arsed. (Feel free to tell me what I got wrong in the comments.)
However, I’m publishing it (with some updated hyperlinks) because this post might still be useful to someone:
- React’s streaming has interesting flaws, and the official RSC implementation ain’t exactly fast
- Low-end devices remain stubbornly stagnant 2 years later
- React indeed says it’s changing in unfamiliar ways
- The intro became more relevant than I could have possibly known when I wrote it.
Don’t believe me? Well, a WebPageTest is worth a thousand posts:
| | Next.js RSCs + streaming | Marko | HackerNews (control) |
|---|---|---|---|
| URL | next-news-rsc.vercel.sh | marko-hackernews.ryansolid.workers.dev | news.ycombinator.com |
| WPT link | 230717_BiDcVJ_9GN | 230717_AiDc35_A1Q | 230717_AiDcNY_A0G |
| JS | 94.9 kB | 0.3 kB | 1.9 kB |
| HTML | 9.5 kB | 3.8 kB | 5.8 kB |
| `text/x-component` | 111.1 kB | 0 kB | 0 kB |
| Waterfall chart | *(chart)* | *(chart)* | *(chart)* |
I’m not including timings, since unfortunately WPT seems to have stopped using real Androids, and Chrome’s CPU throttling is way too optimistic for low-end devices. However, note that browsers can’t parse `text/x-component` natively2, so that parsing is blocked behind JS, and the parse time is more punishing on low-end devices than usual.
I’m also not sure if I should have tested next-edge-demo.netlify.app/rsc instead, but its results seemed so inconsistent that I wasn’t sure it was functioning correctly.
Anyway, time for the original analysis.
Can we get React to be as fast as the Kroger Lite demo?
Probably not.
Okay smart guy, why not?
This is not going to be a short answer; bear with me. I’m going to start with an abstract point, then back it up with concrete evidence for my pessimism.
Code struggles to escape why it was created. You can often trace the latest version’s strengths and weaknesses all the way back to the goals of the original authors. Is that because of backwards compatibility? Developer cultures? Feedback loops? Yes, and tons of other reasons. The broader effect is known as path dependence, where the best way to predict a technology’s future is to examine its past.
Front-end frameworks are no exception:
- Svelte was invented to embed data visualizations into other web pages.
- 💪 Svelte has first-class transitions and fine-grained updates, because those things are very important for good dataviz.
- 🤕 Svelte’s streaming nonsupport and iconoclastic “𝑋 components↦bundle size” curve make sense if you consider that the code Svelte was invented for didn’t do those things in the first place.
- Angular was made to quickly build internal desktop web apps.
- 💪 Angular fans are right about how quickly you can make a functional UI.
- 🤕 If you think about what big-company workstations are like and the open-in-a-tab-all-day nature of intranet desktop webapps, Angular’s performance tradeoffs are entirely rational — until you try using it for mobile.
- React was created to stop Facebook’s org chart from Conway’s Law-ing all over their desktop site’s front-end.
- 💪 React has absolutely proven its ability to let teams of any size, any degree of cooperation, and any skill level work together on large codebases.
- 🤕 As for React’s original weaknesses… Well, don’t take my word for it, take it from the React team:
So people started playing around with this internally, and everyone had the same reaction. They were like, “Okay, A.) I have no idea how this is going to be performant enough, but B.) it’s so fun to work with.” Right? Everybody was like, “This is so cool, I don’t really care if it’s too slow — somebody will make it faster.”
That quote explains a lot of things, like:
- React’s historical promises that new versions will make your app faster for you
- The pattern of React websites appointing specialized front-end performance teams
- Why big companies like React, since departmentalizing concerns so other departments don’t worry about them is how big companies work
I’m not claiming those things are wrong! I’m saying it’s a consistent philosophy. For once, I’m not even being snide about React’s performance; the React team are very open about their strategy of relieving framework consumers from worrying about performance, by having the framework authors do it for them. I even think that strategy works for (most of) Meta’s websites!
But so far, it has not made websites overall any faster. The things the React team have resorted to have gotten odder and more involved. And lastly, our site (and indeed, most sites) is not very similar to facebook.com.
🔮 Note from the future
Because it’s the obvious question to ask: Marko got started when eBay devs wanted to use Node.js, and the business said “Okay, but it can’t regress performance”.
💪 Strength: It didn’t. That’s not typical for JS frameworks.
🤕 Weakness: Marko’s early focus on performance instead of outreach/integrations beyond what eBay uses/etc. also explains why most devs haven’t heard of it.
But can React be that fast anyway? It’s all just code; it can be worked with.
It’s code and path dependencies and culture and a support ecosystem, each with their own values, golden paths, and pewter paths. Let’s examine those concretely: we’ll look at how technically feasible it would be to have React perform at the same speed as the Kroger Lite demo.
Starting with what traits are desirable for MPAs:
- 📃 Streamed HTML
- Incrementally flush parts of the page to the HTTP stream, so pages aren’t as slow as their slowest part.
- 🥾 Fast boot
- If JS reboots on every navigation, it should do so quickly.
- 🥀 Hydration correctness
- Like airplane takeoff, hydration is a few moments where any of a hundred tiny things could ruin the rest of the trip.
- In MPAs, it’s vital to reconcile DOM updates from user input during load, as that “edge case” becomes a repeat occurrence.
- 🏸 Fast server runtime
- If we’re leaning on the server, it better render efficiently.
- Even more important for spinning up distributed datacenter instances, edge rendering, Service Worker rendering, and other near-user processors.
- 🍂 Tree-shakeable framework
- SPAs assume all of a framework’s features will eventually be used, so they bundle them to get into JS engines’ caches early. MPAs want to remove code from the critical path if it isn’t used there, amortizing framework cost across pages.
- 🧠 Multi-page mental model
- If a component only renders on the server, should you pretend it’s in the DOM?
- If you can’t have API parity between server and client, provide clear and obvious boundaries.
📃 Streamed HTML
I consider streaming HTML to be the most important thing for consistently-high-performance server rendering. If you didn’t read the post about it, here’s the difference it makes:
Important facets of incremental HTML streaming
- Both explicit and implicit flushes: early `<head>` for asset loading, around API calls, at TCP window size boundaries…
- All flushes, but especially implicit ones, should avoid too much chunking: overeager flushing defeats compression, inflates HTTP encoding overhead, bedevils TCP scheduling/fragmentation, and hitches Node’s event loop.
- Nested component flushes help avoid contorting code to expose flush details at the renderer’s top level.
- Out-of-order server rendering, for when APIs don’t return in the same order they’re used.
- Out-of-order flushes , so inessential components don’t hold up the rest of the page (like Facebook’s old BigPipe).
- Controlling render dependencies of nested and out-of-order flushes is important to prevent displaying funky UI states.
- Mid-stream render errors should finish cleanly without wasting resources, and emit an `error` event on the stream so the HTTP layer can properly signal the error state.
- Chunked component hydration, so component interactivity matches component visibility.
Marko has been optimizing those important subtleties for almost a decade, and React… hasn’t.
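The core in-order mechanic is simple enough to sketch by hand in Node. This is an illustrative handler, not Marko’s or React’s API; `fetchData` stands in for any slow data source:

```javascript
// Sketch of explicit early-<head> flushing: send static markup immediately,
// then flush data-dependent markup as it resolves. Hand-rolled for
// illustration; real frameworks also handle compression, backpressure,
// out-of-order chunks, and mid-stream errors.
async function streamPage(res, fetchData) {
  // Flush the <head> before any data fetching, so the browser can start
  // downloading CSS/JS while the server waits on APIs.
  res.write(
    '<!doctype html><html><head><link rel="stylesheet" href="/app.css"></head><body>'
  );
  const data = await fetchData(); // the slow part
  res.write(`<main><h1>${data.title}</h1></main>`);
  res.end("</body></html>");
}
```

Wired up as `http.createServer((req, res) => streamPage(res, loadPage)).listen(3000)`, the browser starts fetching `/app.css` while the server is still waiting on data. The hard parts are everything this sketch doesn’t do: out-of-order chunks, nested flush boundaries, compression-friendly chunk sizes, and error states.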
Additionally, we had a brownfield tax. Kroger.com didn’t render its React app to a stream, so that app had many stream incompatibilities of the kind described here:
Generally, any pattern that uses the server render pass to generate markup that needs to be added to the document before the SSR-ed chunk will be fundamentally incompatible with streaming.
—What’s New With Server-Side Rendering in React 16 § Streaming Has Some Gotchas
🥾 Fast boot
SPAs’ core tradeoff: the first page load can be painful in exchange for fast future interactions. But in MPAs, every page load must be fast.
Costs in JS runtime boot
- Download time
- Parse time & effort
- Compilation: compile time, bytecode memory pressure, and JIT bailouts
- Execution (repeats every page, regardless of JIT caching)
- Initial memory churn/garbage collection
Only some of those costs can be skipped on future page loads, with the difficulty increasing as you go down:
- Downloads are skipped with HTTP caching.
- Modern browsers are smart enough to background and cache parses, but not for all `<script>`s — either from script characteristics or parallelization limits.
- Compiler caches intentionally require multiple executions to store an entire script, and compilation bailouts can thrash for a surprisingly long time.
- Execution can never be skipped, but warm JIT caches and stored execution contexts can slash properly-planned runtimes.
- Memory churn and overhead during load is impossible to avoid — intentionally by the ECMAScript standard, even.
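Of those costs, the execution slice is the one page code can observe directly; parse and compile costs mostly have to be inferred from lab tooling. A sketch of budgeting it with the User Timing API (hypothetical mark names; in a real page, the callback would be your framework’s bootstrap):

```javascript
// Measure the execution cost of a boot routine with User Timing marks.
// Hypothetical mark names; only execution is visible here -- download,
// parse, and compile costs don't show up in this measurement.
function measureBoot(bootFn) {
  performance.mark("boot-start");
  bootFn(); // in a real page: framework bootstrap + hydration
  performance.mark("boot-end");
  performance.measure("js-boot", "boot-start", "boot-end");
  const entries = performance.getEntriesByName("js-boot");
  return entries[entries.length - 1].duration; // milliseconds
}
```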
Luckily, v8 moved most parses to background threads. Unfortunately, while that doesn’t block the main thread, parsing still has to finish. This is exacerbated by Android devices’ big.LITTLE architecture, where the available LITTLE cores are much slower than the cores already occupied with core browser tasks in the main, network, and compositor threads.
More on v8’s JS costs and caching, as it’s the primary target for low-spec Androids and JavaScript servers alike:
Does React play nice with JS engines’ boot process?
Remember React’s demos back in the day that got interactive much faster than competing frameworks? React’s lazier one-way data flow was the key, as it didn’t spend time building dependency graphs like most of its peers.
Unfortunately, that’s the only nice thing I found about React and JS engines.
- The sheer size of `react` + `react-dom` negatively affects every step of JS booting: parse time, going to ZRAM on cheap Androids, and eviction from compiler caches.
- React components are functions or classes with big `render` methods, which defeats eager evaluation, duplicates parse workload via lazy compilation, and frustrates code caching:

  “One caveat with code caching is that it only caches what’s being eagerly compiled. This is generally only the top-level code that’s run once to setup global values. Function definitions are usually lazily compiled and aren’t always cached.”

- React prioritizes stable in-page performance at the expense of reconciliation and memory churn at boot.
- React renders and VDOM results are infamously megamorphic, so JS compilers waste cycles optimizing and bailing out.
- React’s synthetic event system slows event listener attachment and makes early user input sluggish each load.
Rehydration performance
Rehydration spans all of the above considerations for JS boot, and was the primary culprit in our performance traces. You can’t ignore rehydration costs for the theoretical ideal React MPA.
Performance metrics collected from real websites using SSR rehydration indicate its use should be heavily discouraged. Ultimately, the reason comes down to User Experience: it’s extremely easy to end up leaving users in an “uncanny valley”.
—Rendering on the Web § A Rehydration Problem: One App for the Price of Two
Thus, faster rehydration is almost as common as incremental HTML in the React ecosystem:
- Strategies for server-side rendering of asynchronously initialized React.js components
- Why React Server § Streaming client initialization
- next-super-performance — The case of partial hydration (with Next and Preact)
- react-lightyear
Along with their own caveats, each implementation runs into the same limitations from React itself:
- Mandatory `[data-reactroot]` wrappers hurt DOM size and reflow time.
- React listens for events on each render root, which increases memory usage and slows event handling since previous `===` invariants are no longer guaranteed.
- The Virtual DOM imposes a lot of rehydration overhead:
- Render the entire component tree
- Read back the existing DOM
- Diff the two
- Render the reconciled component tree
That’s a lot of work to show something nigh-identical to when you started!
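Those four steps can be sketched naively, with plain objects standing in for DOM nodes so it runs anywhere. This is illustrative, not React’s actual reconciler, but the shape of the work is the same — re-render everything, walk what already exists, and compare:

```javascript
// Naive rehydration diff: compare a freshly rendered virtual tree against
// the tree that's already on the page, node by node. Plain objects stand
// in for DOM nodes; React's real reconciler is far more involved.
function diffTrees(rendered, existing, mismatches = []) {
  if (rendered.tag !== existing.tag || rendered.text !== existing.text) {
    mismatches.push({ expected: rendered, found: existing });
  }
  const a = rendered.children ?? [];
  const b = existing.children ?? [];
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if (!a[i] || !b[i]) mismatches.push({ expected: a[i], found: b[i] });
    else diffTrees(a[i], b[i], mismatches); // recurse into matching positions
  }
  return mismatches;
}
```

Even when the trees match exactly and no mismatches are found, every node still gets rendered and visited — the diff only confirms what was already on the page.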
🥀 Rehydration correctness
Cursory research turns up a lot of folks struggling to hand off SSR’d HTML to React:
- Why Server Side Rendering In React Is So Hard
- The Perils of Rehydration
- Case study of SSR with React in a large e-commerce app
- Fixing Gatsby’s rehydration issue
- gatsbyjs#17914: [Discussion] Gatsby, React & Hydration
- React bugs for “Server Rendering”
No, really, skim those links. The nature of their problems strongly suggests that React was not designed for SSR, and thus uniquely struggles with it. If you think that’s an opinion, consider the following:
- React handles intentional differences between client and server render about as ungracefully as possible.
- If we use React for regions of more-complex interactivity throughout a page, what’s the best way to handle the rest of the page?
  - Is it a pain to share state/props/etc. across multiple React “islands” on a page? Do Hooks behave oddly if you do that?
  - Can we use Portals to get around this? (Note Portals don’t SSR.)
- React’s error handling when rehydrating is… nonexistent. By default, it rejects showing any errors to the user in favor of flashing content or tearing down entire DOM trees into blank nodes.
React 16 doesn’t fix mismatched SSR-generated HTML attributes
—What’s New with Server-Side Rendering in React 16 § Gotchas
That’s… kind of a big deal.
- Loses interaction state like focus, selection, `<details>`, and edited `<input>`s on hydration. Losing users’ work is maddening in the best case, and this issue is magnified over slow networks/bad reception/low-spec devices.
- Has issues with controlled elements when the boot process catches up and event listeners fire all at once. Note the above demo is using future Suspense APIs to solve a problem all React apps can fall into today.
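The classic way to hit a mismatch is markup that depends on when or where it renders. A deliberately broken sketch (hypothetical component, not from any real codebase):

```javascript
// Any render output that depends on the environment produces different
// HTML on the server than on the client, and hydration has to reconcile
// the difference -- usually by silently discarding one side.
function renderGreeting(now) {
  return `<p>Rendered at ${new Date(now).toISOString()}</p>`;
}

const serverHtml = renderGreeting(Date.parse("2021-01-01")); // render time, on the server
const clientHtml = renderGreeting(Date.now());               // hydration time, on the client
// serverHtml !== clientHtml, so hydration sees a mismatch
```

Timestamps, locale formatting, `Math.random()`, and feature detection are all variations of the same trap.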
🏸 Fast server runtime
Server-side optimizations for React are more common than anything else in this analysis:
- Rax
- react-ssr-optimization
- ESX
- react-ssr-error-boundary
- React suspense and server rendering § So what’s the catch?
- …and a million others
(The cynical takeaway is that because developers have to pay for React’s inefficiencies on servers, they are directly incentivized to fix them, as opposed to inefficiencies on clients.)
Isomorphic rendering is not a helpful abstraction for tweaking performance between server-side vs. client-side — web applications often end up CPU-bound on arbitrarily-slow user devices, but memory-bound on servers with resources split between connections.
A fast, efficient runtime can double as Service Worker rendering to streams for offline rendering, without needing to ship heavier CSR for things that don’t need it.
Unfortunately, almost all prior art for optimizing React server render involves caching output, which won’t help for Service Workers, EdgeWorkers, cloud functions, etc. So the suggested “trisomorphic rendering” pattern (which the demo bet on for offline) is probably a no-go with React.
🍂 Tree-shakeable runtime
Omitting code saves load time, memory use, and evaluation cost — including on the server! Svelte’s “disappearing framework” philosophy would be the logical conclusion of a tree-shakeable runtime — or maybe its reduction to absurdity, for Svelte’s pathological cases.
Facebook has considered modularizing React, but didn’t conclude it was a win for how they use it. They also experimented with an event system that tree-shook events you didn’t listen for, but abandoned it as well.
In short: nope.
🧠 Multi-page mental model
The most extreme MPA mental model probably belongs to Ruby on Rails. Rails wholly bets on server-side rendering, and even abstracts its JavaScript APIs to resemble its HTTP-invoked Controller/Model/View paradigm.
At the other end, you have React:
- JSX prefers properties to HTML attributes
- Server-side React pretends to be in a browser
- The ecosystem strives to imitate DOM features on the server
- Differences between server and browser renders are considered failures of isomorphic JavaScript that should be fixed
Pretending the server is a browser makes sense if you only use HTML as a fancy skeleton screen, and hydration is only a hiccup at the start of long sessions. But that abstraction gets leakier and more annoying the more times the original web page lifecycle occurs. Why bother with the `className` alias if markup may only ever render as `class`? Why treat SSR markup as a tree when it’s only ever a stream?
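The alias exists for the client-side property model; on the server, any HTML serializer has to map it straight back. An illustrative sketch (not React’s actual code):

```javascript
// The server only ever emits attribute strings, so property-style names
// like className get translated back to the HTML attributes they alias.
const PROP_TO_ATTR = { className: "class", htmlFor: "for" };

function propsToAttrs(props) {
  return Object.entries(props)
    .map(([name, value]) => `${PROP_TO_ATTR[name] ?? name}="${value}"`)
    .join(" ");
}
```

For server-only components, the property-style spelling buys nothing: it’s translated away before a single byte hits the wire.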
- React plus forms (and other DOM-stored state) is annoying, controlled or not — will that be painful when we lean more on native browser features?
  - React’s special handling of `.defaultValue` and `.defaultChecked` vs. `.value` and `.checked` can get very confusing across SSR-only vs. CSR-only vs. both
  - React’s `onChange` pretending to be the `input` event, and the resulting bugs
- Is it more likely we’d persist SPA habits that are MPA antipatterns if we continue with React?
  - Using another language/template system/whatever for non-React bits seems not ideal, but that’s how it’s been done for years anyway — especially since React straight-up can’t handle the outer scaffold of a web page.
- React’s abstractions/features are designed to work at runtime
  - Even without a build step, which is an impedance mismatch for apps with build steps. Not an intractable problem, but one with ripple effects that disfavor it.
  - Leads to unusual code that JS engines haven’t optimized, such as the infamous thrown `Promise`, or Hooks populating a call index state graph via multiple closures.
- Many new React features don’t currently work on the server, with vacillating or unclear timeframes for support:
  - `.lazy()` and Suspense [EDITOR’S NOTE: yes, I know they do now]
  - Portals
  - Error Boundaries
  - How long does it usually take for React to bring client APIs to server parity? That lag may forecast similar problems in the future.
- Speaking of problems in the future…
What about where React is going?
As we’ve seen, there’s much prior art of features desirable for fast SSR: incremental HTML streams, in-page patching for out-of-order rendering, omitting JS for components that never rerender on the client, compiling to a more efficient output target for the server, SSRing React APIs that otherwise aren’t, etc.
There is no reason to think that these individual upgrades are inherently incompatible with each other — with enough glue code, debugging, and linting, React theoretically could have the rendering tricks I found useful in Marko.
But we’ve already gotten a taste of future compat problems with our React SSR relying on deprecated APIs. What about React’s upcoming APIs? Do we want to redo our SSR optimizations with every major version?
This risk is made more annoying by the upcoming APIs’ expected drawbacks:
In our experience, code that uses idiomatic React patterns and doesn’t rely on external state management solutions is the easiest to get running in the Concurrent Mode.
This makes me worry that the previously-mentioned possibility of a state manager to synchronize multiple React “islands” will be mutually exclusive with Concurrent Mode. Less specifically, I doubt we’d be sticking to “idiomatic React patterns”.
In the words of a Facebook engineer:
Better comparison is to a hard fork where we don’t maintain any backwards compatibility at all. React itself might be the sunk cost fallacy.
Essentially, future React will be different enough to break many of the reasons companies rely on it today:
- Knowledge gained from experience with the library
- 3rd-party libraries (in deep ways)
- Established patterns
Potentially worse is how Suspense would affect React’s memory consumption. The VDOM roughly triples the DOM’s memory usage (real DOM + incoming VDOM + diffed-against VDOM), and the “double-buffering” used in Suspense’s examples will worsen that. React also considered slimming its synthetic event system, but Concurrent Mode would break without it.
If you look at how Fiber is written, the architecture truly makes no sense and is unnecessarily complex… except to support Concurrent Mode.
Similar case in design of hooks. If not for concurrency, we woulda just used mutation.
So Concurrent Mode promises smart scheduling about long JavaScript tasks so React’s cost can be interrupted, but the ways React had to change for Concurrent Mode caused other possible issues:
- Worse raw performance from frequently yielding the main thread
- Garbage collection pressure from Hooks
- Memory consumption from multiple simultaneous React trees
- Risks “tearing” during hydration
- Doubling down on synthetic events instead of omitting them for less bundle size
- Existing patterns that didn’t break before, but will — often patterns that libraries needed to eke out performance
What if, instead of trying to be clever about doing a lot of work in the browser, we instead did less work in the browser? That’s why Kroger Lite is fast. It’s not just from box-ticking the features I mentioned, it’s because its technologies were chosen and app code was written in service of that principle.
It may be wise to judge future React not as a migration, but as a completely different framework. Its guarantees, risks, mental model, and benefits are no longer the same. And it really seems to be pushing the depths of framework-level cleverness.
Assume we nail the above; it takes little time to augment React, there are no unforeseen bugs, and teams quickly update components to reap the rewards. We would nevertheless shoulder some drawbacks:
- More painful upgrades to React versions with internal or breaking changes
- Required linting/CI/etc. to ensure React features aren’t used in ways or contexts that would cause problems
- Unknown compatibility issues with ecosystem code like Jest, 3rd party components, React DevTools, etc.
Open questions:
- What new rehydration problems will we see with Concurrent Mode, async rendering, and time-slicing?
- Will hook call order remain persistent across React islands, Suspense, deferred updating, different renderers, combinations of all of those, or scenarios I haven’t anticipated?
Conclusion
This analysis is harsh on React’s MPA suitability. But is that so odd?
It was created to client-render non-core bits of Facebook. Its maintainers only recently used it for server rendering, navigation, or delivering traditional web content. In fact, its SSR was a happy accident. And finally, longstanding evidence holds that React trends antagonistic towards performance.
Why would React be good at the things we ask it to do?
With the FB5 redesign, Facebook is finally using React in the ways that we are, and they have found it wanting. On the one hand, this means React will surely become much better at desirable SSR features. On the other, when this will happen is unclear, it will heavily change React’s roadmap, and React could change so much that familiarity with how it works today could be a liability rather than a strength.
- For the target audience of rural/new/poorly-connected customers, does Facebook even use React to serve them? Did FB5 change anything, or does `m.facebook.com` still not use React?
- If we want a version of Kroger.com as fast as the demo, but using the same framework, processes, management, and developers as the existing site — wouldn’t that just become our existing site? We can’t change our personnel, but we can change the technologies we build on.
Last, but certainly not least: can you make an industry-beating app out of industry-standard parts?
1. Men would rather ignore 20% of the US than stop using React.3 ↩
2. Well, one browser can natively parse `text/x-component`, sorta. I guess RSCs reused the MIME type as an easter egg? ↩
Top comments (13)
Based on my experience, the choice of framework is important, but it is even more important that software engineers understand what they are writing and how they are writing it. The framework organizes work, but does not guarantee a fast and stable application. The people factor is the weakest link here.
Great analysis. It doubles down on what I think most people already know, which is that React is a pig. Nevertheless, I've chosen to stay w/ React despite spending far too much time porting my app to alt frameworks like SvelteKit, SolidStart, and QwikCity. With each alt, I found that, despite much preferring the DX, I was not nearly as productive. People say the React ecosystem is just a bunch of vanilla JS libs wrapped for React. I wish that were true, but it's not. For example, there is no declarative Framer equivalent for vanilla JS, and there is a plethora of mature UI frameworks to choose from for React (I use react-aria). Once a React alternative emerges w/ the critical mass to drive widespread 3P lib support from something more than single-person efforts (that often go abandoned), I'll reconsider a change. If I had a huge dev team to fill in the gaps, I might feel differently.
Thanks for assembling a lot of very specific points against React, far beyond my initial gut feeling (when I first had to use it) that it felt wrong and overengineered to use so much client-side JavaScript to achieve things most of which should have been plain vanilla HTML and CSS. But web performance is always an important consideration.
The personal apps I make are just MPAs, offline first. I use a little library I made called `mpa-enhancer`. It works well enough. For really interactive parts of a web page I should probably use a front-end framework, or something simple like that. But I have limited time, and writing pages in a templating style is extremely straightforward and easy to work with. A lot of these JS libs always fascinate me, though, like VanJS, Solid.js, Svelte, HTMX, and Marko. All really neat. But, like I said, with limited time `mpa-enhancer` does the job well enough.

React is not a good framework -- in fact, it's supposedly a library, not a framework -- but it is the hands-down leader if you want to get a front-end job. And it's one of the worst choices you could make if you have any flexibility. It's full of limitations and workarounds and quirks owing to the fact that it isn't a framework and was designed with goals that don't match the common use patterns of today.
It was a step forward at the time, and a great learning experience, and a reasonable choice if there weren't better choices. Fortunately, there are many alternatives that are much better choices.
For React developers, the best choice is probably SolidJS, as it will be very familiar, but without the quirks and limitations. It's very similar in syntax and look, but has fundamental differences. State isn't tied to components like in React (breaking MVC-like patterns). It is, by definition, optimal in terms of DOM updates: so much so that its performance is roughly tied with vanilla JS, faster than most of the other choices, and much, much more performant than React. Components run once, unless you define effects to run on reactive data updates (and then only the effect runs). It's simple to understand; everything is just a normal function.
Other reasonable alternatives would include Svelte, or even Vue (although I've gotten a little cold towards Vue in Vue 3 -- it's getting complicated like React). But I think I'd rather use Angular or just jQuery than React. It is a drain on productivity and comes bundled with technical debt and code pollution.
It's time to move towards something better. Solid (or Svelte, or Vue). React is just horrible in comparison.
I think Svelte addresses all of React's pain points very well.
To continue your thought: the intentions of Svelte's creators in creating the framework.
My only (and biggest) personal pain point in using Svelte is the missing IDE support in IntelliJ… that's why I still use React. A 2–3× app performance gain is nice, but I can't pay for it with a more-than-25% decrease in development speed (the jump from IntelliJ → vscode). If you use vscode as your main IDE, this does not apply to you.
Your link/quotation collection is awesome
Thank you
Thank you
Fantastic analysis as usual Taylor, well done.
Thank you. This is the 1st time I'm hearing the term "HTML Streaming". One of my professors called its opposite "The White Screen of Death".
Marko is indeed a fine choice for a lot of commercial sites, but if you want something that looks more familiar to react developers, maybe try Solid.js.
Astro is an MPA island framework with a very similar DX to React… in fact, you can use React islands to slowly migrate away… to Preact or Solid or whatever…