Michael Rawlings

Your SSR is slow & your devtools are lying to you

As developers, we want our sites to be fast, and a performant site is the sum of many small wins.

I want to talk specifically about two performance factors, and how your devtools might mislead you into believing they're not worth pursuing, leaving your users with a slower experience. Those two factors are rendering and streaming.


Rendering

Let's start with rendering. The reality is, many of us are building websites using tools that are primarily focused on client-side updates. It's typically easiest for these tools to replicate the browser environment on the server to generate the initial HTML, so that's what many of them do, whether it's a full-blown headless browser, jsdom, or a virtual DOM.

On the lighter end of this spectrum (vdom), performance is typically considered "good enough": often tens of milliseconds, compared to a purpose-built, string-based HTML renderer that may be closer to 1ms.
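To make that concrete, here's a hypothetical sketch of what a purpose-built string renderer does (not any particular framework's API): it concatenates escaped HTML directly, with no intermediate tree to allocate, diff, or walk.

```javascript
// Hypothetical sketch: a string-based renderer builds HTML by
// concatenation alone, so there is no per-node object overhead.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function renderItem(item) {
  return `<li><h2>${escapeHtml(item.name)}</h2><p>${escapeHtml(item.price)}</p></li>`;
}

// Example data (made up for illustration).
const items = [
  { name: "Widget", price: "$9.99" },
  { name: "Gadget <beta>", price: "$19.99" },
];

const html = `<ul>${items.map(renderItem).join("")}</ul>`;
```

A vdom renderer producing the same markup first creates a tree of node objects and then serializes it, which is where the extra milliseconds go.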

These same frameworks also perform a process called "hydration", which typically involves serializing a lot of data down to the browser to make the page interactive. This serialization consumes valuable CPU time and further delays the response.
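That serialized state usually ends up inlined in the HTML itself. A hypothetical sketch of the common pattern (the `window.__STATE__` name is illustrative, not any specific framework's):

```javascript
// Hypothetical sketch of a hydration payload: the server stringifies
// its state into the page so the client can re-create it on boot.
const state = {
  user: { id: 42, name: "Ada" },
  products: Array.from({ length: 100 }, (_, i) => ({ id: i, title: `Item ${i}` })),
};

// Stringifying (and later re-parsing) this blob is CPU time spent on
// data the server has, in most cases, already rendered as HTML.
const payload = `<script>window.__STATE__=${JSON.stringify(state)}</script>`;
```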

Okay, but does it really matter if your page takes an extra 50ms to load? Maybe not. But what about concurrent requests? Rendering is a CPU-bound (blocking) task: if rendering takes 50ms and 10 requests come in at roughly the same time (to the same render process), the 10th is left waiting 450ms before it can even start doing its work and respond. Looking at the response time of a single request doesn't give you the full picture.

blocking render
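That queueing effect is easy to reproduce. A minimal simulation, assuming a single-threaded render process and a synthetic 50ms CPU-bound render:

```javascript
// Simulate 10 simultaneous requests hitting one render process where
// each render busy-loops for ~50ms (CPU-bound, so nothing else runs).
function busyRender(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {
    // burn CPU; the event loop is blocked for the entire render
  }
}

const RENDER_MS = 50;
const arrival = Date.now(); // all 10 requests "arrive" now
const queuedFor = [];

for (let request = 0; request < 10; request++) {
  queuedFor.push(Date.now() - arrival); // how long this request sat queued
  busyRender(RENDER_MS);
}

// The first request starts immediately; the tenth waited ~450ms
// before its render even began.
console.log(queuedFor.map((w) => `${w}ms`).join(", "));
```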


Streaming

Next up, streaming. Specifically, early flushing of HTML before we have all the data necessary to render the entire page. If you don't stream, your page is going to be as slow as your slowest upstream dependency. Streaming decouples your Time to First Byte (TTFB) from your data sources and enables the browser to start rendering and fetching known resources earlier. Depending on the speed of your upstream data sources, this can have a significant impact.
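In Node, early flushing is just writing to the response before your data promise resolves. A minimal sketch, where the hypothetical `fetchSlowData` (300ms here) stands in for a slow API or database:

```javascript
import { createServer } from "node:http";

// Stand-in for a slow upstream dependency (API, database, etc.).
function fetchSlowData() {
  return new Promise((resolve) =>
    setTimeout(() => resolve("Hello from a slow upstream"), 300)
  );
}

const server = createServer(async (req, res) => {
  res.writeHead(200, { "Content-Type": "text/html" });

  // Early flush: the shell and known resources go out immediately,
  // so the browser can start fetching /app.css while we await data.
  res.write('<html><head><link rel="stylesheet" href="/app.css"></head><body>');

  const data = await fetchSlowData();
  res.write(`<main>${data}</main>`);
  res.end("</body></html>");
});

server.listen(3000);
```

The browser receives the `<head>` (and starts fetching the stylesheet) roughly 300ms before the content arrives, instead of receiving nothing at all for those 300ms.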

Streaming doesn't only affect your TTFB; it also hastens First Contentful Paint (FCP), allowing available content and loading indicators to display earlier. And depending on how the page is broken up, it allows hydration to occur earlier and piecewise. Ultimately, streaming can have a positive impact on Time to Interactive (TTI) as well.

Even if your data sources are pretty fast, at worst your content ultimately reaches the end-user at the same time. But when your API, database, or network hits an outlier on latency, streaming has you covered.

streaming render


Emulating Slowdown in Devtools

If you're testing performance, you typically want to understand the worst-case scenario. Everything's going to look fast for the person on a Mac Studio M1 Ultra with 10 Gigabit Ethernet. No, you want to understand how your site feels for the person on an older Android device over a spotty cellular network. And that last part, the slow network, is where we run into trouble.

The way devtools emulates slow networks hides the impact of server-originated delays. If we dig into what presets such as "Slow 3G" and "Fast 3G" are actually doing, we can see why:

network throttling

You'll see here there is a "latency" setting, which ensures the request takes at least that long, but...

DevTools doesn’t add an extra delay if the specified latency is already met.
- Matt Zeunert, DebugBear

What? So on "Slow 3G" where the latency is 2s, that means whether the server responds instantly or takes the full 2 seconds to respond, devtools shows the same timing for those requests? Yup.

You can try it yourself. Take a look at the timing for these two pages without devtools network throttling and then with "Slow 3G":
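If you want to reproduce this locally, a toy server like the following gives you an instant page and a slow one to compare (a sketch; the 2s delay deliberately mirrors the "Slow 3G" latency figure):

```javascript
import { createServer } from "node:http";

const page = "<html><body>hello</body></html>";

// Two endpoints: "/" responds instantly, while "/slow" takes ~2s to
// produce its response (a slow *server*, not a slow network).
const server = createServer((req, res) => {
  if (req.url === "/slow") {
    setTimeout(() => res.end(page), 2000);
  } else {
    res.end(page);
  }
});

server.listen(3001);
```

Without throttling, `/slow` is visibly about 2 seconds slower than `/`. With "Slow 3G" enabled, devtools reports similar timings for both, because the server-side delay is absorbed into the simulated latency floor.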


Takeaways

You'll notice I didn't include any hard numbers in here. Things like server architecture will make these factors more or less relevant. Do your own testing on real devices & networks. Even more so, look at what your actual users are experiencing—especially at the long tail.

It doesn't help that we're often locked into a certain class of SSR performance before we ever get to the stage of testing these things. If you've built your app using one of the aforementioned client-focused tools, you may have to reconsider that decision or hope you can find enough wins elsewhere.

While there may be other factors impacting your site's performance, making your server respond faster is only going to improve things. And don't let your devtools fool you: if something's slower over a fast network, it's going to be slower over a slow network as well.

Latest comments (25)

Thomas Lepérou

Couldn't help but share this more-than-promising approach to serving websites and web apps: github.com/BuilderIO/qwik

Those are assumptions that serve your demonstration pretty well x))

where that's often tens of milliseconds, compared to a purpose-built, string-based HTML renderer that may be more like 1ms.

&

if rendering takes 50ms and 10 requests come in at roughly the same time (to the same render process), the 10th is left waiting for 450ms

I'd appreciate any additional references; providing great web experiences to users is such an exciting topic!

thanks for sharing

cednore

The original nature of a webpage is to be server-side rendered. The evolution of JavaScript turned a simple HTML document viewer into an all-in-one, OS-like environment.

Mihail Malo

Please beat offline-first, ServiceWorker-cached application shells, or even static HTML+JS on a local CDN, with a CGI page halfway across the globe.

Endre Varga

Great article, thanks.
I am wondering if HTML streaming makes sense with static HTML files? So, let's assume I create a statically generated site with Astro.build and host those pages on AWS with CloudFront (CDN). Is it possible to stream those HTML files? Would that even improve anything? Or, because the files are premade, is there nothing to stream?

jwise7

"DevTools doesn’t add an extra delay if the specified latency is already met." I wonder why that decision was made.

Francesco Di Donato

Excellent explanation. I do not fancy SSR unless it is really needed. Less is more, and always will be :)

peerreynders

That is missing the point.

From the takeaways:

If you've built your app using one of the aforementioned client-focused tools, you may have to reconsider that decision

Quote:

"Gen 2 SSR often results in an increase of the overall latency to make the UI interactive (able to respond to users input)"

"we are entering the era where frontend development will shift away from the client-side, and lean much more heavily on server-first architectures."

i.e. slapping SSR on a client-side rendered framework can only do so much. For more significant improvements a different approach is necessary.

This is against the background of Marko being a server-first architecture that aims to provide a single app authoring experience that is typically associated with client-side rendered frameworks (i.e. no "one app for the cost of two" development effort).

rxliuli • Edited

I use Preact instead of React for small applications to keep the overall bundle size down and improve performance, rather than using more complex build tools or other SSR frameworks for optimization; one example is our personal website. (Of course, another reason is that it isn't complicated and doesn't require huge UI frameworks or various third-party libraries.)

rxliuli.com/

Filip Oščádal

you should really start using PWAs

Bert Meeuws • Edited

I don't see a reason why you would use a framework for this site. It's a one-pager.

rxliuli • Edited

In short, it's not that I want to go without a framework entirely; I need JSX to split the page into components, and writing raw HTML/CSS/JS is hard to get used to now... In addition, the JS bundle is mainly weighed down by markedjs, which accounts for 70k of the entire bundle size and is CPU-intensive.

stats.html: file.io/ziBNipcv9Pzd

Paweł Kowalski • Edited

JSX to split a page into components? I'm pretty sure every templating language has that (e.g. Liquid), and a framework is not necessary for it.

I would go as far as to say that 11ty with some Liquid is good enough for that, and it serves 0 JS by default.

GrahamTheDev • Edited

You need to run the mobile page speed test, not the desktop one. The reduced CPU power and increased latency that a mobile user may experience is the part you need to worry about, and the mobile test accounts for this with network throttling and CPU slowdown.

If you go to PageSpeed Insights it will show mobile by default. You still score well (81/100) FYI, just thought I would give you a heads up, as your blocking time from JS is high and could be an easy fix 👍❤️

rxliuli • Edited

Thanks for the reminder, I did some simple optimizations and it should be better now. The long JS blocking time seems to be caused by markedjs parsing and rendering Markdown. Is there a smaller library you'd recommend?

GrahamTheDev

That is better.

Sadly I am an old school "render it on the server" type of person so I have no recommendations, but perhaps there is a way to "chunk" the page and only render the Markdown that appears above the fold (content visible without scrolling) first and then do the rest in small chunks. That way you won't block the main thread for too long? Just an idea, it might be a nightmare to implement depending on the library you use and your setup.

Either way, 250ms saved on TBT is not to be sniffed at, that is great!

rxliuli

Eventually, I gave up looking for a smaller Markdown parsing library and instead converted Markdown to HTML during the build phase to avoid the performance penalty of runtime parsing. It should perform pretty well on mobile as well.

Related plugins: npmjs.com/package/vite-plugin-mark...

GrahamTheDev

Yeah, SSR is the way forward for anything like this. Great work hitting that magic 100! It took me longer than 3 hours to fix mine, put it like that!

peerreynders

and instead converted markdown to html during the build phase to avoid the performance penalty of runtime parsing

With Astro you should be able to minimize (or delay) the component JS sent to the client to only what's necessary for interactivity (though the migration would be quite a bit more effort).

For the time being, Astro considers itself in early beta (0.25.0), focusing on SSG and expanding later to SSR (one experience report).

Matej Leško

great article :)

Liftoff Studios

Beautiful article!
I don't like SSR lol

James Vanderpump

Like it or hate it, if you want to rank high in Google, you'd better use SSR. Sure, Google can parse a client-side generated page, but it will do so with more errors and at a lower priority.

Liftoff Studios

Dude it's just my preference lol
Why do you wanna jump on and pick a fight 😆

James Vanderpump

No fighting! Just a mention of where SSR can be a necessity.

Suresh Kumar Gondi • Edited

That totally depends on the tech stack they're using :) Not every site needs SSR... there are SSGs too!