Ryan Carniato for This is Learning


The Return of Server Side Routing

Return? It never went away. Or at least that is what some smug "told you so" is going to say. But for those who haven't been living under a rock for the past decade, for better or for worse much of the web has been moving to client-side navigation on their sites.

This movement has been fueled by the adoption of tools that support this sort of architecture. The "modern" JavaScript framework is designed to build applications. Single Page Applications. A name that originated from the fact that they do not go back to a backend server to navigate between pages. All routing happens in the browser.

It started with web applications, but React, Angular, Vue, and co. have permeated every industry and every type of web experience imaginable, from the grandiose scale of the most successful technology companies to the "Hello, I'm Jane" page made by a high school student creating a portfolio for her college acceptance. From local businesses to eCommerce giants, government agencies, news sites, and everything in between, we've seen a steady migration.

But like all things, there is potential for too much of a good thing. JavaScript has blown open the ceiling of what we can achieve in a web experience, but it comes with a cost. A cost paid most dearly by those without the best devices or the fastest networks, but also felt by anyone when things don't go according to plan.

And it is something that those who see themselves as stewards of the web are very concerned with, on both sides of the discussion. By this point, it should be clear that a one-size-fits-all solution may be difficult to achieve, but there are definite improvements to be made.

The common thread is to send less JavaScript to the browser, championed most recently by the 0kb-of-JS frameworks. But I want to expand on this, as the repercussions are about more than progressive enhancement or lazy hydration. Everything is converging on an architectural change the likes of which we have not seen since SPAs came on the scene over a decade ago.

We're putting routing back on the server.


Multi-Page Apps (MPA)


So we're back to PHP and Rails? No. I hope that doesn't disappoint anyone. Every time around we aren't the same as we were the last time. But it isn't a terrible starting point. The majority of the web never needed to be more than just a site that renders some HTML. And most JavaScript frameworks let you generate a static site, or maybe at least some static pages within your Single Page App to keep low interaction pages quick and light.

But we've been there, and we know that for all the AlpineJSes, Stimuluses, and Petite Vues, we've become accustomed to the Developer Experience perks of our favorite frameworks, and authoring a second app on top of the first is far from desirable. And for most solutions it is all or nothing: include the <script> tag or not. Beyond the simplest of requirements, this is a parlor trick rather than an experience.

Instead, we've seen huge growth in the space of what we used to call widgets back in the early 2010s but now refer to as Islands. These independent islands are a bit more capable, though, as they can be server-rendered and hydrated with the latest tooling such as Astro, Slinkity, and Iles. This is a coarser-grained approach that does well for many sites, but we've also seen more sophisticated tools in this space designed from the ground up with this in mind, like Marko and Qwik, employed on the largest of eCommerce solutions.
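To make the islands idea concrete, here is a minimal sketch of how a client runtime might discover which regions of server-rendered HTML need hydration. The `data-island` attribute and the function names are invented for illustration; real tools like Astro or Marko use their own markers and work against the live DOM rather than an HTML string.

```javascript
// Hypothetical island discovery: the server marks interactive regions
// with a data-island attribute, and the client only hydrates those.
// Everything else ships zero JavaScript.
function findIslands(html) {
  const islands = [];
  // A regex over the HTML string keeps this sketch runnable anywhere;
  // a real implementation would query the DOM instead.
  const re = /data-island="([^"]+)"\s+data-props='([^']*)'/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    islands.push({ component: m[1], props: JSON.parse(m[2]) });
  }
  return islands;
}

const page = `
  <header>Static header, never hydrated</header>
  <div data-island="Counter" data-props='{"start":5}'></div>
  <footer>Static footer</footer>`;

// Only the Counter island would be hydrated; header and footer stay inert.
console.log(findIslands(page));
// → [ { component: 'Counter', props: { start: 5 } } ]
```

The point of the marker scheme is that the set of islands is knowable from the server render alone, which is what lets the tooling skip shipping component code for the static parts.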

But regardless of how it's done, when you navigate on the server you know certain parts of your page will never be rendered in the client. You can reduce the JavaScript sent and executed dramatically. Mileage will vary, but even things like eBay's landing page have been reported to show an 80-90% reduction in code size from this technique.

Still, this isn't the end of the story because while full server reloads work well for many sites, we've become accustomed to the benefits of being able to preserve client state in SPAs and to do smoother transitions.


HTML Frames

I haven't found a name for this, but it is used by a few tools, most notably Turbo as part of the Hotwire framework for Rails, and the approach is applicable elsewhere. Essentially: intercept all link clicks and form submissions and disable the default behavior, then request the new location from the server and replace the contents of the <body> when the response arrives.

We can have our MPA, with the server handling the route, but navigate in the browser, preserving our JavaScript app state. As each panel loads in, we hydrate it, and since we know it can only be rendered on the server, all the same optimizations above apply.
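A rough sketch of that orchestration, in the spirit of Turbo or pjax (the names here are illustrative, not any library's actual API). The browser wiring is shown in comments; the testable core is pulling the new `<body>` out of the fetched page.

```javascript
// Extract the inner HTML of <body> from a full server-rendered page.
// Real tools parse properly and also merge <head> changes; a regex is
// enough for this sketch.
function extractBody(html) {
  const match = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
  return match ? match[1].trim() : html;
}

// In the browser, the orchestration would look roughly like:
//
//   document.addEventListener('click', async (e) => {
//     const link = e.target.closest('a[href]');
//     if (!link || link.origin !== location.origin) return;
//     e.preventDefault();                    // disable the full reload
//     const res = await fetch(link.href);    // let the server route
//     document.body.innerHTML = extractBody(await res.text());
//     history.pushState({}, '', link.href);  // keep the URL in sync
//   });

const nextPage = '<html><head><title>About</title></head>' +
  '<body><main>About us</main></body></html>';
console.log(extractBody(nextPage)); // → <main>About us</main>
```

Note that the interception only needs a few hundred bytes of runtime, which is why this pattern sits between a pure MPA and a full SPA framework.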

However, now we need JavaScript to orchestrate this sort of transition. Not a lot of JavaScript. Many MPA frameworks load a small boot-loader anyway if they support lazy hydration but in the pure MPA, it is possible to not need any runtime.

While lighter, this approach still isn't SPA-smooth. Loading HTML from the server and replacing what was there might preserve app state, but nothing in the DOM: no focus, animations, player position on a video tag, etc. This brings us to the next thing.


Server Components


Is the answer coming from React, of all places? React Server Components are very restrictive, in a way that is almost identical to how islands work. You can't nest Server Components (the "static part") in Client Components (the "islands") except by passing them as children.

In practice, this means Server Components are like MPAs, except you can go back to the server to "re-render" the static part of the page as a VDOM and have the browser receive that and diff the changes. Even though client components are preserved and parts of the static HTML that never change are not replaced, we are essentially talking about a routing paradigm.

When you click a link it is intercepted and the server component endpoint handles the request, returning the new VDOM to be diffed. When you perform a mutation that would update data on the page, the full page is re-rendered on the server and the new VDOM representation is sent back. It is a lot like a classic form post you'd do with an MPA.
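As a toy model of that flow (not React's actual wire format, which is considerably more sophisticated), the server can render the static part of the page to a serializable tree, and the client can diff it against the previous one, applying only what changed:

```javascript
// "Server side": render route data into a plain-object tree that can
// cross the wire as JSON. The tree shape is invented for illustration.
function renderToTree(route, data) {
  return { tag: 'main', children: [
    { tag: 'h1', text: route },
    { tag: 'p', text: data.summary },
  ]};
}

// "Client side": walk old and new trees, collecting minimal patches.
// (Assumes both trees have the same shape, as in a re-render of the
// same route.)
function diff(oldNode, newNode, path = 'root') {
  const patches = [];
  if (oldNode.text !== newNode.text) {
    patches.push({ path, text: newNode.text });
  }
  const kids = (oldNode.children || []).length;
  for (let i = 0; i < kids; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i],
      `${path}.${i}`));
  }
  return patches;
}

const before = renderToTree('/home', { summary: 'Welcome' });
const after = renderToTree('/home', { summary: 'Welcome back' });
// Only the <p> changed, so only one patch needs to be applied.
console.log(diff(before, after));
// → [ { path: 'root.1', text: 'Welcome back' } ]
```

The practical upshot is the same as described above: client components between these static nodes are left alone, and unchanged static HTML is never touched.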

The tradeoff? Well, that's a lot of data to send over the wire on every server re-render, though in comparison to an MPA it isn't really. This also needs much more orchestration than the other methods: you need a framework in the browser. So this approach won't necessarily get you the fastest page loads, but it has the same capacity to eliminate huge percentages of component code sent to the browser unnecessarily.


Analysis

These are 3 distinct solutions; it isn't as if one simply supplants the other. A pure MPA has the potential for the best page-load performance. HTML frames are the most optimal of the 3 for navigating to new locations. Only Server Components have the potential to be indistinguishable from the Single Page App experience we have today. But all 3 approaches share the same model for how navigation should work: it is full-page, and it is from the server.

It isn't just this pushing us this way. Consider frameworks like Remix or SvelteKit that promote Progressive Enhancement. This naturally has you fall back to doing form post-backs and full-page navigations.

Next, consider things like React Query. It has become more and more common to refetch all the related resources rather than perform direct cache updates on mutation. Remix's optimistic-updating forms are another example of this: they use the route structure to refresh all the data on mutation.

In essence, instead of trying to bring a bunch of expensive caching logic to the browser, you take a refetch-first mentality. And compared to reloading the whole page for rendering, it isn't that bad. The benefit is ensuring consistency of page data without a bunch of extra client code. Have you seen the size of the leading GraphQL clients? About 40kb gzipped. Simply putting that and React on the same page gets you over the size budget of any performance-critical site before you write a line of code.
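A refetch-first store can be sketched in a few lines. The API names below are made up for illustration; React Query's `invalidateQueries` is roughly the real-world counterpart of this idea.

```javascript
// Sketch of the refetch-first mentality: instead of surgically patching
// a client-side cache after a mutation, invalidate the affected keys
// and let the next read fetch fresh data from the server.
function createStore(fetcher) {
  const cache = new Map();
  return {
    async query(key) {
      if (!cache.has(key)) cache.set(key, await fetcher(key));
      return cache.get(key);
    },
    // After a mutation, drop what we knew about the affected keys.
    // Consistency without any cache-patching logic.
    async mutate(action, keys) {
      await action();
      for (const key of keys) cache.delete(key);
    },
  };
}

// Fake server: a todo list the mutation appends to.
const todos = ['buy milk'];
const store = createStore(async (key) => (key === 'todos' ? [...todos] : null));

(async () => {
  console.log(await store.query('todos'));   // → [ 'buy milk' ]
  await store.mutate(async () => todos.push('write post'), ['todos']);
  console.log(await store.query('todos'));   // → [ 'buy milk', 'write post' ]
})();
```

The second query goes back to the server rather than trusting a hand-patched cache, which is exactly the trade being described: a little more network, a lot less client code.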

This progression all points to the same thing. We're heading back to routing on the server.


Conclusion

Given this, I have some thoughts for the future. The way I think this plays out is that MPAs as a technology stay as they are and continue to improve their ability to do better partial hydration, smarter lazy loading, more dynamic delivery (streaming).

I think pure HTML Frames are an intermediate step. As new approaches come out for Server Components, especially non-VDOM ones, we will see them get absorbed. The ideal approach is for Server Components to both provide fine-grained updates and be able to send HTML for newly rendered things. HTML rendering is going to be faster for initial page load or any large navigation. Supporting hybrid/partial formats may be a thing.

Where this gets interesting though is when we can apply tricks that we've learned from SPAs to this. Nested routing especially comes to mind as each section is a logical top-level entry point that can update independently in many cases. Routing is the backbone of everything on the web.

Honestly, when we blur these lines a whole lot is possible still without building in a way that pushes everything into the browser. We can scale from simple full-page reloaded MPA to the most sophisticated apps. Maybe these are the #transitionalapps Rich Harris predicted. But as far as I'm concerned there is only one way to find out.

Let's get building.

Comments (29)

Ravavyr

Just wanted to say this is a nice write up, pretty extensive, but I don't think it hits the nail on the head.

What everyone seems to forget is that frontend web dev was never meant to be written the way it is today.

With Angular, React, and Vue we allowed hardcore programmers/engineers from companies like Facebook and Google to dictate how frontend devs should build websites.
If you look at the core of each one, it's a beast, it's super overly complex thinking for something that's always been as simple as HTML, CSS and a little JS.

Frontend was never supposed to be this complicated. The frameworks abstracted stuff and essentially just wrote their own "way of writing JavaScript", to the point that you have to learn more about the framework than about the languages in order to get things to work.

Has anyone stopped to ask. "How do you build a SPA from the ground up?"
I did. I spent a few weeks on it and created a simple script that lets me build SPA sites: taino.netlify.app. It's 13kb, 3kb compressed. One script, requires no NPM install, no build time, and it just works.
That site is small, you'll say, just 7 pages, of course it works; what about a real application? Well, I created indiegameshowcase.org using Taino JS and it has user accounts, teams, games, and dynamically rendered content pulling from a Node/MySQL server, all of which shows up on Google [though the ranking sucks because of competition and I'm not doing active SEO content on it].

Yes, it's not perfect. I'm one guy, not a 1000-dev-strong company like FB or Google, but I showed you can easily build 10-page sites that load blazingly fast, are SEO friendly, and fit on a floppy disk, including the assets, if you've compressed them well enough.

As for server-side routing, the only reason stuff like that was re-invented for the JS frameworks is because they were all terrible for SEO. Google and other bots couldn't read them. 404s read like 200s and here we are.

Now, yes, SSR is nice and all, but it's not something we absolutely NEED anymore. Google can read your JS sites just fine. The frameworks just suck at it because their bundles are large, their rendering logic means the bot has to parse a bunch to get to the content, and they shove way too much into the render, so Google just doesn't like it.
Again, what I wrote allows you to render HTML within JS and it's clean enough for bots to read as long as they support JS rendering [which they're all coming around to anyway for the big frameworks], but I'm just one guy.

Just my two cents.

Ryan Carniato • Edited

I'm never sure when to preface it but I'm the author of SolidJS. I've been working on it for almost 6 years now and spent the first few creating it on my own. So I think I can relate with some of the sentiment here:

Has anyone stopped to ask. "How do you build a SPA from the ground up?"

That's basically what I've been chronicling for the last few years across dozens of articles on medium and now on dev.to. The thing is when I built Solid it was for myself (and maybe the startup I worked at). I reveled at the small size (Solid has grown in features but the treeshakeable core for most apps is around 3.9kb compressed) and the performance topping almost any benchmark I could find. I even made claims that my client side framework could outperform their server side frameworks at even the metrics they claimed SSR benefitted.

I wasn't wrong.

I wasn't right either.

In fact this sentiment was all too popular about 5 years ago and I feel that was our real failing. We let things progress beyond their original design before they were ready for that use case. I'm not going to say there isn't a bit of a misalignment here. The more popular these solutions get the more likely people are trying to apply them to places they were not designed for. Often simpler solutions fit better.

During the process I was hired by eBay to work on their framework, Marko. And one thing you realize there is not everyone has the same device or network, and that not every organization can deliver products in a way that doesn't require technology infrastructure to manage people as the product.

The one common thing I found suddenly being thrust into that world, and interacting with the maintainers of these libraries is they all care. In some cases the needs of an eCommerce giant or a Meta or Google are going to be different and the work caters to that, but I've never seen any indicator this is a product of that environment in terms of a pattern for excess.

In fact, most core teams of frameworks or popular libraries are much smaller than you'd think. The Marko team at eBay was 2 people for 4 years. The React core team, I believe, is around 8. These teams work like small cells in the midst of those 1000s of engineers. Honestly, coming from a startup, it felt familiar. And they are the ones trying to encourage best practices, trying to design things in a way that causes the least harm.

And like it or not, people like using these solutions so much they continue to want to use them anywhere they can. It might not be correct but there is huge interest and the tools are shifting to address that reality. Take it or leave it as just one guy to another, but having played on both teams this is bigger than you or me.

I'm glad you gained the experience of creating that framework. I've felt more fulfillment doing that than I have experienced during work over the years. If it is what you believe in share what you've built, get people excited about it. Show people a better way to do things, that's why we are all here. Speaking from experience you never know where that can take you.

rxliuli

Matthew effect... To be honest, the hooks mechanism of SolidJS combined with TSX greatly improves on the pain of manual dependency management in React hooks, but its ecosystem is too small. IDE support (WebStorm/VS Code), UI libraries, routing, state management, testing, GraphQL integration: there are too many things to deal with. The ecosystem isolation between front-end frameworks is already so high that it is almost impossible to convince others to use an unknown framework in a production environment.

Ryan Carniato

Definitely. This field also benefitted from the fact that constant newcomers were entering it at an incredible rate, and the range of projects and platforms it can apply to was increasing. Maybe we are past that point now. But technology tends to go in phases. I've been in no particular rush. But yeah, I doubt there will ever be another React without something colossal changing.

Ravavyr

I fully agree with all the above. I've been in the industry for some 16 years and I've seen large corporations fail to implement proper practices, and large dev teams that can't produce what a 2-man team could. I still deal with a lot of that day to day.

I understand the communities support the frameworks we have, and frankly, they're a better solution than the old stacks, then again the old stacks work too and like you said it depends on the size and needs of each company.

I've planned/designed various tools, applications, websites for different sized companies and know how that goes all too well, to the point where personally I feel everything should be custom and catered to what the business needs are, but that's a discussion for another time.

As for my Taino JS, it was a proof of concept. I don't expect people to pick it up and use it, but i hope those who check it out learn a bit from it, so they're not so reliant on the larger frameworks without understanding what's happening at the language level itself, or at the browser level. Frameworks are great, until there's a bug and none of the devs can solve it because they don't really understand how the framework works. And this is more common than not, otherwise we wouldn't have thousands of issues on github repos regarding these things.
I also don't understand where people find the time to write such beautiful documentation and community sites for their projects, haha.

I'll have to check out SolidJS and see what it's like. Kudos on that, it looks "solid" :)

Clif Carter

"SSR is nice and all, but it's not something we absolutely NEED anymore."

Are you saying some webapps/apps, going forward, won't need it? Or that we shouldn't use SSR at all, or only use it for a very small number of use cases?

Ravavyr

lol, some will ALWAYS use it :)
Some devs don't know any better and will continue using the old stacks that all do SSR but render slower
Some devs will use new frameworks and the SSR tools they provide because of SEO and other reasons
Some devs won't use SSR because you don't need it in order to build a functional website that ranks well on SEO. You just need to understand how google's bots work.
Some devs who understand it will still use SSR because of specific business needs or because they want to appeal to bots beyond google and bing.
Some devs won't use SSR because they don't know what it means or does.

The real questions are always:
What are you building? Do you care about SEO? Do you care about performance?
Do you understand all three enough to do it the best way for that project?
Also, "best" is relative as you can build a website/app a dozen different ways and achieve the same result.
In the end though, how sure are you that your results are in the top 10% for performance and SEO?
That's what it comes down to.

Clif Carter • Edited

Thank you for replying! Yeah, I just listened to a podcast about caching:
Easy to implement. Hard to implement efficiently and correctly. Or at least it takes time to understand it in depth enough to implement correctly.

Every app is different. Learning what tech/framework/etc. will work great for you requires dedicated upfront time to research.

An upfront cost that is always worth it

Ravavyr

So Caching is a funny beast. I agree, it's worth implementing for many systems, but you gotta remember, with JS frameworks, much of what was once cached, can now simply be served as static sites.
So really you do very little caching, though one might say "static code" == "cached code".
Static sites would make api calls, meaning you either cache the api results or setup scalable environments for the dynamic data.


Ashley Sheridan

I think a large part of the problem is the "works on my machine" mentality which is all too easy for any web dev to slip into.

We sometimes forget that our machines, which are capable of running powerful IDEs, editors, and servers, are a lot more powerful than the average person's own device.

Single page apps can reduce the number of calls being made to a backend, and reduce the individual page-load data, but sometimes we're focusing too much on the trees and not the forest. Those SPAs are huge in comparison, and with things like CSS-in-JS that further reduce the browser's own capability to cache things, we're introducing larger upfront load times. This all has a big impact on performance and accessibility.

Dang Le

so damn right

Brian Takita • Edited

Ruby on Rails had something like this...back in 2006. Behold the marvels of the hotness that was RJS, allowing you to dynamically update the browser with your favorite language, Ruby, without having to deal with that messy JavaScript rubyonrails.org/2006/3/28/rails-1-...

jQuery was not even at 1.0 yet. Blackberry & Nokia were battling to be the king of mobile. And here we are, the seasons of tech progress having come circling back again. This time the code is isomorphic, JavaScript is not something to be avoided, & sites are light enough to work in browsers on a mobile phone (no iPhone back then). Amazing!

I remember pitching to Yehuda Katz & Carl Lerche @ Engine Yard HQ an RPC library based on RJS & jQuery in early 2010 to integrate server & client side code. I was quite proud of it. Yehuda was not interested, as they were instead working on this client side rendering library...which I believe became EmberJS...& were more interested in client/server at the time.

I'm certain there are many stories of similar patterns of evolution playing out in the history of tech. History rhymes...yada yada

Ingo Steinke

Thanks for the brilliant post!
Multiple-page applications don't need React for the single-page feeling either. If you just want to prevent reloading the same header, navigation, etc. for each page change, that's what AJAX did in Web 2.0 and it still does. Take a small utility like pjax and you get the same page-speed improvement while still providing accessible and SEO-friendly pages, each of which remains a complete landing page out of the box.

Ryan Carniato

Yes, pjax seems to be an example of what I was calling HTML frames above. Thanks for sharing.

Daniel Ahn

Thank you Ingo, this answered my question of: can MPAs provide that SPA native-app feeling with navigation? Would you say that utilities like pjax, or what Ryan mentioned (HTML frames), do exactly that? Or is it more complicated, and there are pros/cons? Personally I think it would be cool if things migrated more back to SSR versus CSR due to performance issues, but IMO the 'native feeling' is quite important...

Ingo Steinke

I find it hard to see the pros of (over)using React (etc.) for everything that just used to work even 20 years ago. Make pages load quickly by optimizing both frontend and backend for page speed (use caching, a reverse proxy, etc., optimize database queries, avoid unnecessary requests). React's new server side rendering features add a lot of complexity for the sake of what? The pre-rendered pages have to be "hydrated" with the client-side JavaScript application anyway, so we get a good time to first byte, but a very bad time-to-interactive score.
But to answer your actual question: there are (and have been) ways to achieve the native feeling with very little JavaScript. Basically you intercept full page / document requests, load partial content, and update the DOM using JavaScript, which makes it easy to use CSS transitions, loading animations, and the like.

matrium0

Cool article!

I disagree on some points though:
While a page CAN be as simple as "HTML, CSS and a little JS", it's usually not that simple, unless you want to write JS like it's 1998 or ignore a big chunk of users (outdated browsers).

If you want to write modern JS with good browser support you will at least need Babel or other ways to use polyfills.

Want to use Sass or Less? You probably need some sort of build pipeline.

Also, SPAs handle bundling very efficiently. It hits your index.html every time of course, but on returning visits usually 80-100% of the bundles will already be cached, making for a blazingly fast experience that is unachievable with pure server-side rendering.

There are more cases and it just adds up. You can implement all that yourself of course - but in the end, when I think about what I'd have to do all by myself to come to a satisfying solution, I end up creating yet another framework.

SPAs are overkill for static content pages for sure, but they are such a time saver for real applications.

Daniel Ahn

Thank you, I appreciate your perspective. Like Ryan said, maybe the "transitional apps" Rich Harris termed will hit a sweet spot that encompasses a big portion of websites.

Jens Neuse

I really enjoyed reading your post. It's interesting that you mention "Server-Side routing" as a trend and then bring up GraphQL as one of the issues for a large bundle size.

Coincidentally, we've developed a solution (open source soon) to put GraphQL on the server and expose GraphQL Operations dynamically as a REST API. This way, the client is only about 2kb in size and can leverage http layer caching. I'm curious what you think about the concept. Here's a link on the topic, it's a bit over the top but maybe. ;-) wundergraph.com/blog/the_most_powe...

Mathieu Huot

Always enjoy your articles, thanks for sharing. I find the mental model of tools like Astro or Slinkity or even Eleventy + Alpine + vanilla easier to grasp and generally more flexible and more transparent. It's probably because of the clear separation between the client side and the server side code.

Ryan Carniato

Those are all MPAs like the first category. For those used to the power of SPAs it is a bit harder to just give up those benefits. However, the way things are looking we might be able to get the benefit of both soon enough.

Andrew Baisden

Great article full of knowledge.

nikosv

I'm not a front-end dev, so some questions might be a bit naive.

1. So, for example, isn't SSR keeping the state of the page on the server, that being a disadvantage for scaling?

2. Isn't SSR a way to not use JavaScript on the front end but write the client code in your backend language of choice? See Vaadin, in which you write the client components in Java.

"Vaadin in its beginnings used GWT under the hood to compile the Java code to JavaScript. Not anymore.Since version 10 it compiles to Web Components and through its Client side JavaScript engine communicates the events taking place browser side to the server side which decides how to proceed, in for example modifying the DOM;that is Vaadin renders the application server side and this server side endeavor goes by the name of Vaadin Flow"

Ryan Carniato
1. Not necessarily. Most SSR in modern frameworks is only an initial state, and then the browser maintains state after. This article is talking about moving to a more server-driven model, but it doesn't necessarily mean the backend needs to be stateful. I mean, obviously it needs to go somewhere, the URL can only hold so much. But what is being looked at now in JS land is the ability to do classic server-style navigation/rendering but preserve the state in the browser.

2. I mean, that is one take on SSR. JS frameworks have been doing SSR for the past several years using Node backends to run the same code on client and server. The benefit is similar to this promise from Vaadin, where you write a single app and it works in both places. The one thing I've observed, though, is that many times using server technology to generate JavaScript doesn't end up with very optimal JS. I've used those solutions in the past. I'm sure a backend dev can point out where Node is inefficient, but the browser is a much more resource-constrained environment.

Hmm.. not sure what the Vaadin pitch is.. Web Components are JavaScript. But following along, it sounds like it's doing HTML partials, which isn't unlike the HTML frames I'm talking about above. It's an interesting approach. The challenge with these approaches is that while they do allow for less JS and a single-app mentality, they rely a lot on the network for things that may not need to go to the server. Generally client-side updates are faster once you are there. It's a cool model, but ideally I think the direction is more hybrid: something to get the advantages of both sides. Some work is better done in the browser without a server round trip, but if you're going to the server anyway, better to offload that work and reduce JS size.
