Ryan Carniato for This is Learning


Learning to Appreciate React Server Components

This is my personal journey, so if you are here hoping for the general "How To" guide you won't find it here. Instead, if you are interested in how I, a JavaScript Framework author, struggled to see obvious things right in front of me, you're in the right place. I literally had both pieces in front of me and was just not connecting the dots.

It isn't lost on me that I'm talking about a yet-to-be-released feature as if it were some long journey, but for me it is. If you aren't familiar with React Server Components, this article will make no sense. You see, we are on the cusp of a very exciting time in JavaScript frameworks, one that has been in the making for years, and we are so close you can almost taste it.

In the beginning there was Marko

Now you're probably thinking, isn't this an article about React Server Components? Shhh... patience. We're getting there.

You see, I work 12 hours a day. 8 hours of that is my professional job, where I am a developer on the Marko core team at eBay. Then, after some much-needed time with my family, my second job starts, where I am the core maintainer of the under-the-radar hot new reactive framework Solid.

Marko is arguably the best on-demand JavaScript server-rendering framework to date from a technical perspective. I'd argue it's not even close, but maybe that's a bit biased. Still, the benchmarks say as much, and the technology is something every library envies (yes, even React, but we will get to that).

If you aren't familiar with Marko, it's a compiled JavaScript framework like Svelte that started development in 2012 and reached 1.0 in 2014. And what a 1.0 that was, considering it shipped with progressive (streaming) server rendering and only shipped the JavaScript needed for interactivity to the client (which evolved into Partial Hydration). Two of the most coveted features for a JavaScript framework in 2021.

But it makes sense. Marko was built as a real solution for eBay at scale from the start. It was aggressively pursued and within a couple of years had taken over the majority of the website, replacing the Java that had been there as a full-stack solution. React's path to adoption at Facebook was much more incremental.

Now, Marko had come up with quite an interesting system for Progressive Rendering in 2014. While really just an example of using the platform, it was oddly missing from modern frameworks. As Patrick, the author of Marko, describes in Async Fragments: Rediscovering Progressive HTML Rendering with Marko:

Instead of waiting for an async fragment to finish, a placeholder HTML element with an assigned id is written to the output stream. Out-of-order async fragments are rendered before the ending <body> tag in the order that they complete. Each out-of-order async fragment is rendered into a hidden <div> element. Immediately after the out-of-order fragment, a <script> block is rendered to replace the placeholder DOM node with the DOM nodes of the corresponding out-of-order fragment. When all of the out-of-order async fragments complete, the remaining HTML (e.g. </body></html>) will be flushed and the response ended.

Automatic placeholders and insertions, all part of the streamed markup (outside of the library code), are super powerful. Combined with Marko's Partial Hydration, it meant that in some cases there was no additional hydration after this point, as the only dynamic part of the page was the data loading. This was all delivered in a high-performance, non-blocking way.
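To make the mechanics concrete, here is a rough sketch of what such an out-of-order stream could look like on the wire. The element names and ids are purely illustrative, not Marko's actual output:

<!-- flushed immediately: the page shell plus a placeholder for the slow fragment -->
<section id="reviews-placeholder">Loading reviews...</section>
<!-- ...the rest of the page keeps streaming... -->

<!-- flushed later, once the async data resolves, just before </body> -->
<div id="reviews-fragment" style="display:none">
  <ul><li>Great product!</li></ul>
</div>
<script>
  // swap the placeholder for the real content as soon as it arrives
  var placeholder = document.getElementById("reviews-placeholder");
  var fragment = document.getElementById("reviews-fragment");
  placeholder.replaceWith(fragment);
  fragment.style.display = "";
</script>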

Render-as-you-Fetch

I had never heard it referred to by this name before reading React's Suspense for Data Fetching docs, but you'd better believe I'd hit this scenario before.

You don't need Suspense to do this. You just have the fetch set the state and render what you can, which is usually some loading state. Generally, the parent would own the data loading and the loading state, and coordinate the view of the page.

GraphQL took things further with the ability to co-locate fragments with your components. In a sense, you are still giving control of data fetching to something higher up the tree to allow for orchestration, but the components and pages can still declare their data requirements. However, we still have a problem when code splitting enters the picture: on navigation, you end up waiting for the code to load before making data requests.
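To picture the co-location part, a component might declare its own data needs as a fragment while a query higher up composes them (an Apollo-style sketch; the names here are invented for illustration):

import { gql } from "@apollo/client";

// the component owns its fragment...
export const ARTIST_CARD_FRAGMENT = gql`
  fragment ArtistCard on Artist {
    name
    topTracks { title }
  }
`;

export function ArtistCard({ artist }) {
  return <h2>{artist.name}</h2>;
}

// ...while the page-level query composes the fragments it needs
export const ARTIST_PAGE_QUERY = gql`
  query ArtistPage($id: ID!) {
    artist(id: $id) { ...ArtistCard }
  }
  ${ARTIST_CARD_FRAGMENT}
`;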

Facebook had solved this with Relay, which, with strict structure and tooling, could properly parallelize the code and data fetching. But you can't expect everyone to use that solution.

The problem is that by plain JavaScript means you can't split a module. You can tree-shake unused code. You can lazily import a whole module. But you can't include only the parts of the code you want at different times. Some bundlers are looking into the possibility of doing this automatically, but it isn't something we have today. (Although it is possible to use virtual modules and some bundler sorcery to achieve this.)

So the simple solution was to do the split yourself. The easiest answer is not to lazy load the routes directly but to make an HOC wrapper for each. Assuming there is a Suspense boundary above the router, you might do this:

import { lazy, useState, useEffect } from "react";
const HomePage = lazy(() => import("./homepage"));

function HomePageData(props) {
  const [data, setData] = useState();
  useEffect(() => {
    /* ... load the data and set the state */
  }, []);
  return <HomePage data={data} />;
}

I used this approach relentlessly in my Solid demos to get the quickest loading times. At some point last summer I decided that this was mostly boilerplate. If I was going to make a file-based routing system for our new starter, similar to Next.js, I wanted this solved. The solution was to build the concept of a data component right into the router.

One simply writes their components in pairs: homepage.js and homepage.data.js. If the second is present, the library automatically wires it up and handles all the code splitting and parallel fetching for you, even on nested routes. Instead of wrapping the child, the data component simply returns the data.
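As a rough sketch of the convention (the file names come from the pairing above; everything inside them is hypothetical):

// homepage.data.js - fetched and run in parallel with the homepage.js code chunk
import { fetchJSON } from "./fetch-json"; // hypothetical shared helper

export default function HomePageData({ params }) {
  // return the data (or a promise for it); the router hands it to the page component
  return fetchJSON(`/api/homepage?category=${params.category}`);
}

// homepage.js - the lazy-loaded page just consumes whatever data it is given
import { TrendingList } from "./trending-list"; // hypothetical component

export default function HomePage(props) {
  return <TrendingList items={props.data} />;
}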

From a server vs. client perspective, the library providing a constant isServer variable allows any bundler to dead-code-eliminate the server-only code from the client bundle. I could make data components use SQL queries on the server and API calls on the client seamlessly.
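For instance, the same homepage.data.js could branch on that flag (a sketch; in Solid the isServer constant comes from solid-js/web, while the db and fetchJSON helpers here are made up):

import { isServer } from "solid-js/web";
import { db } from "./db"; // hypothetical server-only database helper
import { fetchJSON } from "./fetch-json"; // hypothetical client fetch helper

export default function HomePageData({ params }) {
  if (isServer) {
    // this branch is dropped from the client bundle, letting the bundler also drop the server-only import
    return db.query("SELECT * FROM products WHERE trending = 1");
  }
  return fetchJSON(`/api/trending?category=${params.category}`);
}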

React Server Components

On December 21st, 2020, React Server Components were previewed. And I just didn't see them coming. I was blindsided because the main things they were trying to solve already had solutions. Suspense on the server was completely doable, and so was parallelizing data fetching around code splitting.

Being able to identify which components didn't need to be in the client bundle was nice, but it was manual. It was something Marko had been able to auto-detect with its compiler for years, and if we are talking interactive SPAs, I just wasn't seeing it. Especially if it increased React's code size by more than 2 Preacts (the standard unit of JS framework size measurement). Anything being done here could easily be done with an API. And if you were designing a modern system that supports web and mobile, why wouldn't you have an API?

Something Unexpected

Adam Rackis was lamenting React's handling of communication around Concurrent Mode, and it spawned a discussion about seeing React's vision.

Eventually, Dan Abramov, the gentleman he is, decided to respond (on the weekend, no less) in a less volatile forum, a GitHub issue addressing where things are at.

This stood out to me:

For example, we didn't plan Suspense at all in the beginning. Suspense came out of a streaming server renderer exploration.

Suspense was the first of the modern features, announced back in early 2018 as the technique for lazy loading components. What?! This wasn't even its original intention.

Suspense for Streaming SSR makes a ton of sense if you think about it. Server-side Suspense sounds a whole lot like Patrick's take on Out-of-Order progressive rendering in Marko.

As consumers of a product we tend to take in each new piece of information in the context of the order we receive it. But have we been deceived? Has React actually been working on the features backwards?

I can tell you, as a framework author, establishing stateful primitives seems like it should be the first step, but Hooks didn't show up until late 2018. It seems Hooks were not the starting point but the result of starting at the goal and working back to a possible solution.

It is pretty clear, when you put this all in the context of the Facebook rewrite, that the team had decided the future was hybrid and that something like Server Components was the endgame as far back as 2017, or possibly earlier.

New Eyes

Understanding that, all the other pieces started falling into place. What I had seen as a progression was actually like watching segments of a movie playing in reverse.

Admittedly, I had suspected as much, but it suggested that they had worked through a lot of these render-as-you-fetch-on-the-server scenarios much earlier. One has to assume they had gotten to a place similar to my data components at some point.

I also happened to be playing with SvelteKit this week and noticed its Endpoints feature. Endpoints give an easy, single-file way to create APIs that mirror the file path by making .js files. I looked at them and realized the basic example with get was essentially the same as my .data.js components.

So what does it take for file-system-based routing to notice .server.js files, preserve them as data components on the server, convert them into API endpoints, and auto-generate a call to that API endpoint as the data component for the client? With Vite, less than you might think.
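To sketch the transformation (all names here are hypothetical): the same file is kept as-is as the data component for the server build, registered as an API route, and rewritten in the client build into a generated fetch against that route.

// trending.server.js - what the author writes; the server build keeps it as-is
import { db } from "./db"; // hypothetical server-only helper

export default function TrendingData({ params }) {
  return db.query("SELECT * FROM products WHERE trending = 1");
}

// what the client build could be rewritten to by the plugin: a call to the generated endpoint
export default function TrendingData({ params }) {
  return fetch(`/api/trending?${new URLSearchParams(params)}`).then((res) => res.json());
}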

The result: you have code that always executes on the server, even after the initial render, yet it is just part of your component hierarchy. A virtual return of "the monolith" in a single isomorphic experience.

It really doesn't take much more to picture what would happen if the data were encoded JSX (or HTML) instead of JSON. The client receiving this data is already wrapped in a Suspense boundary. If you could stream the view into those Suspense boundaries the same way as on the initial render, it would close the loop.

Closing Thoughts

So the evolution of the idea is actually a pretty natural one. The fact that many platforms are API-based and don't need "the monolith" is beside the point. Server Components are really the extension of the ideas around parallelized data loading and code splitting we've already seen in Facebook's Relay.

Am I going out now looking at how to implement them everywhere? Probably not. Marko has shown there are other paths to Partial Hydration and aggressive code elimination. I'm going to continue to explore data components before looking at the rendering aspect. But at least I feel I better understand how we got here.

Top comments (8)

peerreynders

homepage.js and homepage.data.js

The thing is that the example of retrieving data for an entire page and injecting that data into a "component" to render the appropriate user representation is easily justified - meanwhile the example from Data Fetching with React Server Components, where each component fetches its own data, evokes all the controversy that goes along with using active record as an architectural pattern - mostly because component boundaries are often too finely grained to align with optimal data access and query boundaries. Fetching an artist's details, top 10, and discography separately only makes sense if that information is managed by separate services - and even then those services will usually be queried for data relating to multiple artists. Ideally a visual component should receive just enough data to do its job while having PI (persistence ignorance), i.e. it should have no dependency on the fundamental organization of the data.

From that perspective having fine-grained components that fetch their own data should raise some eyebrows.

In Complexity: Divide and Conquer! Michel Weststrate states:

If the view is purely derived from the state, then routing should effect the state, not the derived component tree.

By the same reasoning visual components should appear as a result of state change - the appearance of visual components should not result in a change of state.

And in terms of coupling, data fetching components have the whole banana, gorilla, jungle thing going on.

From server vs client perspective the library providing a constant isServer variable would allow any bundler to dead code eliminate the server only code from the client.

Marko's optional split components seem more straightforward:

  • the x.marko template describes the user facing HTML fragment.
  • the x.component.js module describes the server side functionality to instantiate the template.
  • the x.component-browser.js module describes the code that takes control of the fragment on the client-side.

x.component.js can assume a Node.js environment while x.component-browser.js is tailored to the browser environment - and common code is handled via shared modules.

It really doesn't take much more to put together what would happen if the data was encoded JSX(or HTML) instead of JSON data.

... but it still has to be decoded. A straight up HTML partial spliced into the DOM is intuitively fast.

Decoding some non-standard stream of elements for the VDOM - perhaps including code for some nested client-side components - may not be that much faster than decoding JSON data with a custom client-side component.

Ryan Carniato • Edited

Yeah, I thought that example in the video was meant to be illustrative rather than best practice. I don't expect there to be many server components fetching data in practice if you want good performance. I think the problem is that apps already fetch below the navigation fold, and they want an easy way to address it.

I do find I make these sorts of concessions in Solid compared to React. React solutions tend to be general, and I tend to focus on a category. My application of concurrent rendering is limited mainly to linear progressions like navigation, where React's Concurrent Mode could render 10 different possible futures.

Same here, as I think co-location still requires logical roots. I like the idea of giving components a say in what they fetch the way GraphQL fragments do, but it's more of a secondary tree that starts from the .data.js component rather than a bunch of isolated nodes. This lets us build up to the single query again. I suppose Server Components on their own don't help; they move the inefficiencies to the server.

The thing I like about isServer is that generally I think this gets buried in a service. For example, Solid has a special primitive for data loading called a resource, which takes an input signal (that is tracked) and a fetcher that receives the value of that signal and returns a promise.

const [user] = createResource(() => props.userId, fetchUser)

I'm more likely to bury isServer in the fetchJSON call. I will just swap out the database call for the API call. Now this doesn't help with partial hydration, but it means that most of the component code looks isomorphic. Until I toyed with the .server.js idea, I really didn't want authoring the pages and wiring up the data to be about thinking client vs server. Marko's split works because it uses Single File Components. Not something we will see JSX libraries pick up.

... but it still has to be decoded. A straight up HTML partial spliced into the DOM is intuitively fast.

Yeah, but React is talking about diffing it. The performance savings of running through a diff and patch are arguable, but they are touting preserving browser state like element focus. Now, I'm not sure which stateless elements will be carrying focus there, but presumably they could re-render on the server and only update the content of one text node. I'd argue that if they just sent the data, my fine-grained reactivity could have done that with less over the wire and no diffing. But they saved sending the component JS.

But the thing I wonder is, in a fine-grained reactive approach, component size scales with the amount of interactivity. The largest static component is still:

// the template is just a big HTML string turned into a cloneable DOM node
const tmpl = makeTemplate("Giant HTML String")

function MyComp() {
  // during hydration the markup already exists, so there is nothing to clone
  return !isHydrating && tmpl.cloneNode(true);
}

Most of the component is the HTML string. Whether I send that in a JS file I fetch in parallel or send it over the wire each time, what am I actually saving? The problem is doing both on the initial render if you never render it again on the client. It isn't too hard, if you can detect it's stateless, to just have the component no-op and tree-shake the template.

peerreynders

The thing I like about isServer is generally I think this gets buried in a service.

My personal perspective is that isServer prompts the whole Replace Conditional with Polymorphism scenario - or in this case don't complect things that are separate.

I think a case can be made that isServer is symptomatic of a design error. CSR frameworks deliberately design around components being client-centric: components are created on the client, live on the client, and die on the client.

Introducing SSR and Server-Side Components violates that fundamental assumption of CSR design. Typically addressing such a "design error" without fundamentally changing the underlying design leads to accidental complexity. And I think that's what's happening with React - its core is taking on more and more complexity to accommodate the illusion of its (beloved) abstraction while having to deal with the realities of its operational environment.

Contrast that with a framework that from the get-go frames the problem entirely differently:

  1. Server side: Component needs to be rendered and perhaps some initial UI state. No behaviour required.
  2. Client side: Component is already rendered and perhaps has some initial UI state. UI state dependent behaviour still needs to be bound to it.
  3. Client side: Component is created entirely on the client.

CSR frameworks only worry about the last case - they are "incorrectly framed" to address the other two cases down the road (with predictable consequences).

Marko's split works because it uses Single File Components. Not something we will see JSX libraries pick up.

My view is that it works for Marko because the template can be factored out as a commonality between the server (as an HTML fragment) and the client (as a DOM fragment). The issue is that most JSX code conflates code to control templating with code to enable interaction behaviour. Code to control templating is needed on both the server and client side. Interaction behaviour is purely a client-side aspect - therefore should be optionally injected (perhaps at compile-time) into rendering.

Devin Rhode

I love how you write a lot on twitter and blog a lot, it's very fun and educational to have a front-row seat watching something like Solid come to fruition :)

Ryan Carniato

Yeah, I mean, who said Learning in Public is just for beginners? Sure, that is the widest audience since we are all beginners at some point. But I'm constantly learning things. Maybe things that are less applicable, but who knows, maybe my content will help the next would-be framework author or performance specialist.

LadiesMan217

You work 12 hours a day, that's amazing.

Ryan Carniato

Well, not every day. I have my weekends. One thing to consider is that the work I do in the evenings is very varied: sometimes it's coding, sometimes it's planning, sometimes it's writing articles like this. Even with that, I definitely take some nights off to refresh.

But doing a bit almost every day just keeps things moving forward. I've been doing it this way for about 4 years now, so I understand it's a long-term goal I'm working towards.

Pawan Pawar

Good one!