<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cagdas Ucar</title>
    <description>The latest articles on DEV Community by Cagdas Ucar (@cagdas_ucar).</description>
    <link>https://dev.to/cagdas_ucar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F786683%2F0be8ccd9-17c6-47f6-a81b-c1e51495abdb.png</url>
      <title>DEV Community: Cagdas Ucar</title>
      <link>https://dev.to/cagdas_ucar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cagdas_ucar"/>
    <language>en</language>
    <item>
      <title>Don't use CDN!</title>
      <dc:creator>Cagdas Ucar</dc:creator>
      <pubDate>Sat, 24 Dec 2022 19:01:36 +0000</pubDate>
      <link>https://dev.to/cagdas_ucar/dont-use-cdn-50h1</link>
      <guid>https://dev.to/cagdas_ucar/dont-use-cdn-50h1</guid>
      <description>&lt;p&gt;If you are an experienced web developer/architect, you may find the title of this article questionable. The conventional wisdom is that you SHOULD use CDNs for better performance, easy backups and many other benefits. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is a CDN?
&lt;/h2&gt;

&lt;p&gt;For people who may not be familiar with the term, let's make it clear: CDN stands for Content Delivery Network. The idea behind CDNs is simple. The primary customers of CDNs are content providers on the web. As a content provider, you have many users around the world. Typically, the content is dynamically generated and references static assets. There are some completely static sites that are just hosted HTML, CSS, JS, and other static assets like libraries, fonts, videos and images, but most websites have some part of their content generated on the server. &lt;br&gt;
This is true even for single-page applications. That's what we mean by dynamic content: the data coming from the server, usually backed by a database. &lt;/p&gt;

&lt;p&gt;CDNs provide a means for you to upload your static assets and have them distributed to servers around the globe. CDNs operate with a domain name that resolves to the server closest to the user trying to reach it. That way, users can download your static content much more efficiently from a server that is physically closer to them than your own servers, which may be farther away and therefore slower to download from.&lt;/p&gt;

&lt;p&gt;This idea is simple, but it has an assumption that is easy to overlook: connection costs. &lt;/p&gt;

&lt;h2&gt;
  
  
  How I Got to This Point
&lt;/h2&gt;

&lt;p&gt;I have built two website editing platforms in my career. For the first one, we set up a load-balanced cluster. It was 2010, so we actually built the entire thing with bare-metal servers and F5 load balancers. We stored the static assets in the Lustre parallel file system and served them from the same server as the dynamic content. All that time, I could not shake the feeling that I was supposed to be using CDNs, because that's the conventional wisdom, but I never found the opportunity to implement it while I was there. &lt;/p&gt;

&lt;p&gt;At my second company (&lt;a href="https://webdigital.com" rel="noopener noreferrer"&gt;WebDigital&lt;/a&gt;), we used AWS. Determined to do things right this time, I used S3 to store digital assets. I set up CloudFront in front of it, and I was expecting top-notch performance. And it was pretty good. See the page loading graph for desktop: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5oplnapw9ak72ieqijh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5oplnapw9ak72ieqijh.jpg" alt="Desktop Page Load"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then I looked at the page speed insights. Desktop looked good, but mobile was showing a warning. See below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjecvfu7u4m3urhq7h4c4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjecvfu7u4m3urhq7h4c4.jpg" alt="Page Speed Insights Slow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First contentful paint is over 3 seconds. To be clear, that's basically Google emulating a low-powered mobile phone on a 3G network. Before you say "nobody uses 3G anymore", see &lt;a href="https://www.youtube.com/watch?v=fWc3Zu6A3Ws" rel="noopener noreferrer"&gt;Surma &amp;amp; Jake Archibald's video&lt;/a&gt; explaining why it's still important to be able to handle these low-powered phones on slow networks. &lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;I went to webpagetest.org and ran a mobile test. Here's what it looked like: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3tpgkn35wqd09hxnq0r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3tpgkn35wqd09hxnq0r.jpg" alt="Mobile 3G blocked"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are some average stats for this: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WebDigital: 444ms DNS + 302ms connect + 341ms SSL negotiation = 1087ms&lt;/li&gt;
&lt;li&gt;AWS: 313ms DNS + 303ms connect + 379ms SSL negotiation = 995ms&lt;/li&gt;
&lt;li&gt;Google fonts: 302ms DNS + 302ms connect + 343ms SSL negotiation = 947ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, &lt;strong&gt;the problem is that these low-powered mobile phones take more than a second to establish a connection to ANY server!!!&lt;/strong&gt; It doesn't matter if you have many servers around the globe closer to the users. The problem is &lt;strong&gt;CPU power&lt;/strong&gt;! DNS query, TCP connection and SSL handshake take more than a second, regardless of the server. Yes, AWS and Google seem to be able to respond a bit faster to DNS queries, but connection and SSL take just as long, and this is &lt;strong&gt;BEFORE we have a single byte&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;If you look at TTFB (time to first byte) on these servers, it's typically around 1.2 seconds. Think about this. Your own page arrives at 1.2 seconds. At best, you will then connect to the CDN, which takes another 1.2 seconds. That leaves you only 0.6 seconds to do anything on the page before hitting the 3-second mark, and that's not enough in most cases. &lt;/p&gt;
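&lt;p&gt;To make the budget concrete, here's the back-of-the-envelope math (a sketch using the rounded averages above, not exact measurements): &lt;/p&gt;

```javascript
// Rough connection-cost budget on a low-powered phone over 3G (times in ms).
// The numbers are the rounded averages from the measurements above.
const originTtfb = 1200;   // DNS + connect + SSL + first byte from your own server
const cdnTtfb = 1200;      // the same costs repeated for the CDN host
const target = 3000;       // the 3-second mark used by Page Speed Insights

const remainingBudget = target - originTtfb - cdnTtfb;
console.log(remainingBudget); // 600ms left to parse, style and paint the page
```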

&lt;p&gt;And for people who would recommend techniques like lazy loading, async loading, defer, push, etc.: I tried them all. It really does not change much. Most of the time, you need the static assets loaded for the page to be considered loaded. Here's an example of what it looks like with HTTP/2 push: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vjt44oauh010d4ljas6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vjt44oauh010d4ljas6.jpg" alt="Mobile 3G blocked with Push"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Single Server
&lt;/h2&gt;

&lt;p&gt;Here's the bottom line and the performance principle I learned: use a single server if possible. I spent the next eight months moving our assets from S3 to EFS to eliminate the extra connection, and added the ability to download the Google fonts and serve them from our own servers. After all the hard work, here are the stats with a single-server page load: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0bbytmbpxivwxwofpdx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0bbytmbpxivwxwofpdx.jpg" alt="Mobile 3G Single Server"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looks nice, huh? No more long connection wait times. Here's what it looks like on page speed insights: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5aqwgqlzqt05gpod98ws.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5aqwgqlzqt05gpod98ws.jpg" alt="Page Speed Insights Fast"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Never Use CDN?
&lt;/h2&gt;

&lt;p&gt;Note that the performance principle does NOT say "don't use CDN" - it says use a single server. For example, if you have a static website where you upload your pages to the CDN along with the other static assets, a CDN is a great fit! That is to say, there are still many cases where CDNs are useful. However, if you have an application that generates dynamic content and you want to host the static assets on a CDN, you have to consider that low-powered mobile phones will not be able to load your pages in under 3 seconds. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Preact Async Rendering: Solution to Initial Render Blocking</title>
      <dc:creator>Cagdas Ucar</dc:creator>
      <pubDate>Wed, 05 Jan 2022 21:43:24 +0000</pubDate>
      <link>https://dev.to/cagdas_ucar/preact-async-rendering-51p2</link>
      <guid>https://dev.to/cagdas_ucar/preact-async-rendering-51p2</guid>
      <description>&lt;p&gt;The traditional way of building websites is now called multi-page application (MPA). In this classic mode browser makes a call to the web server to get a page. Once the page is loaded, the dependencies of the page (styles, JS, images) are then requested from the same server or supporting servers. The problem is that many of the pages share the same data and it's inefficient to re-request the same data over and over. Furthermore, MPAs cannot support transitions between pages. There is a sharp cut-off and visible loading time in most cases when switching pages. &lt;/p&gt;

&lt;p&gt;Single-page applications came into existence around 2010 exactly for this reason. The first frameworks were Ember, AngularJS and Backbone. All technologies take time to mature, and SPAs are no exception. Since the beginning, traditionalists have had a number of arguments against using SPA frameworks. &lt;/p&gt;

&lt;p&gt;The first argument was that SPAs were bad for SEO and search engines would not be able to index the site properly. I actually remember debating this with a developer circa 2013. I was arguing against that claim at the time. Those days are long gone. Google now actually encourages SPA websites.&lt;/p&gt;

&lt;p&gt;The other argument traditionalists had against SPAs is complexity, but that's being taken care of by the many frameworks, which keep making things easier and easier. There are thousands of hours of training materials for many frameworks. &lt;/p&gt;

&lt;p&gt;That being said, the biggest challenge modernists faced was probably the initial loading delay. SPA client-side rendering takes time to initialize. During that time, the screen is either empty or shows just a loading message or icon. To solve that problem, a new technology emerged: server-side rendering (SSR). In this mode, the same application is rendered on the server for the requested page, and that render is sent in place of the loading screen. The client side then takes over and updates the page if needed, but usually just attaches the event handlers needed for the SPA to work, which is called hydration. &lt;/p&gt;

&lt;h2&gt;
  
  
  Blocking Render
&lt;/h2&gt;

&lt;p&gt;It's been 12 years at this point since the initial SPA frameworks, and you would think we have solved every challenge, but there is one more, and it's probably the biggest one: initial render blocking. You may use SSR to send the rendered page, but the initial client-side rendering (CSR) can still take a significant amount of time. During that time, the browser will be busy and unresponsive to user input. It's usually pretty short (less than 300ms) but it's definitely there.&lt;/p&gt;

&lt;p&gt;Here's what it looks like on the performance tab of dev tools (see the big block of a 100ms render task): &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fet9r4gyavyq1w31kxbq0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fet9r4gyavyq1w31kxbq0.jpg" alt="Blocking Render"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Google created a new set of performance metrics called web vitals. They consist of 3 metrics: Largest Contentful Paint (LCP), First Input Delay (FID) and Cumulative Layout Shift (CLS). I'm not sure if web vitals have already started contributing to SEO, but we all know that day is coming soon if it's not already here. Here's the thing: First Input Delay is a big challenge for single-page applications due to initial render blocking. You may also see a version of this metric as "total blocking time" in Lighthouse. Multi-page applications usually don't have that problem, and even today many people choose the traditional way of building websites for this reason. &lt;/p&gt;

&lt;h2&gt;
  
  
  Web Workers
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/developit/preact-worker-demo" rel="noopener noreferrer"&gt;There are some documented solutions for this problem using web workers.&lt;/a&gt; Web workers run on secondary CPUs, so they are not blocking. &lt;/p&gt;

&lt;p&gt;The problem is that working with web workers is a pain. They can't change the DOM, so how can we use them for rendering? The thing is, rendering actually consists of 2 activities: "diff" and "commit". The best approach would be to move the "diff" to the web worker and have it relay the needed commits to the main thread. The problem with this approach (apart from its complexity) is that the application itself ends up living in the web worker, because the diff also calls into the application code for rendering and other events. Because the web worker runs on a background thread, and on mobile devices that often means a slower core, having the entire application in a web worker is a non-starter in many cases. Splitting the application code into the main thread while keeping the diff in the web worker would be ideal, but that would require too much communication between the main thread and the worker, which would end up making it slower. &lt;/p&gt;
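&lt;p&gt;The idea above can be sketched as a message protocol: the worker posts "commit" operations and the main thread applies them to the DOM. The shapes here are hypothetical, not the exact protocol of preact-worker-demo: &lt;/p&gt;

```javascript
// Worker side (sketch): the diff produces a list of commit operations and
// relays them to the main thread, since the worker cannot touch the DOM:
//   postMessage({ type: 'commit', ops: [{ op: 'setText', id: 3, text: 'hi' }] });

// Main-thread side (sketch): apply the relayed operations to real nodes.
// domIndex is a hypothetical map from node ids to DOM elements.
function applyCommitOps(ops, domIndex) {
  for (const item of ops) {
    if (item.op === 'setText') { domIndex[item.id].textContent = item.text; }
    if (item.op === 'setProp') { domIndex[item.id][item.name] = item.value; }
  }
}
```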

&lt;h2&gt;
  
  
  How does Async Rendering work?
&lt;/h2&gt;

&lt;p&gt;The ideal solution is to break the initial rendering into little pieces. Browsers have an API for that called &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/requestIdleCallback" rel="noopener noreferrer"&gt;requestIdleCallback&lt;/a&gt;. The program asks: "hey browser, I need to do some work, how much time can you give me?" and the browser answers: "here you go, run for 20ms and then check with me again to get more time", and so it goes until the render is complete. This way the render is not "blocking" but &lt;a href="https://medium.com/swlh/the-advent-of-cooperative-scheduling-into-the-javascript-world-d9799dfbe2b4" rel="noopener noreferrer"&gt;"cooperative"&lt;/a&gt;. This is also known as "interruptible rendering" or "asynchronous rendering". &lt;/p&gt;
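&lt;p&gt;That conversation with the browser can be sketched like this (a minimal sketch assuming the requestIdleCallback API; the setTimeout fallback is only for environments without it, and the names are mine, not a framework's): &lt;/p&gt;

```javascript
// Cooperative rendering sketch: run small render units only while the
// browser's idle deadline has time remaining, then ask for another slice.
const ric = typeof requestIdleCallback === 'function'
  ? requestIdleCallback
  : (cb) => setTimeout(() => cb({ timeRemaining: () => 16 }), 0);

// Run as many units as the deadline allows; return the index to resume from.
function runSlice(units, start, deadline) {
  let i = start;
  while (units.length > i) {
    if (deadline.timeRemaining() > 0) { units[i](); i += 1; }
    else { return i; } // out of time: hand control back to the browser
  }
  return i;
}

function renderCooperatively(units) {
  return new Promise((resolve) => {
    let i = 0;
    function work(deadline) {
      i = runSlice(units, i, deadline);
      if (units.length > i) { ric(work); } // check in again for more time
      else { resolve(); }
    }
    ric(work);
  });
}
```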

&lt;p&gt;Ideally, this should be implemented at the framework level, and there have been a lot of discussions, but none of the SPA frameworks has a complete solution for it yet. I think it's a problem for millions of people. &lt;/p&gt;

&lt;h2&gt;
  
  
  React Async Rendering
&lt;/h2&gt;

&lt;p&gt;React did a &lt;a href="https://www.velotio.com/engineering-blog/react-fiber-algorithm" rel="noopener noreferrer"&gt;re-write in 2016&lt;/a&gt; exactly for this problem, but in the end they disabled the feature because they had too many bugs. I think the main problem is that they were trying to do "concurrent rendering", where components can be painted in a different order. They are now saying &lt;a href="https://reactjs.org/blog/2021/06/08/the-plan-for-react-18.html" rel="noopener noreferrer"&gt;they will enable those features with React 18&lt;/a&gt;, but I don't think it's the solution people have been waiting for. They ended up introducing breakpoints in the application via Suspense. So developers are supposed to determine where to place breakpoints in the code to break up the initial rendering. This shifts the responsibility to the web page designer, who probably has no clue what render blocking is. Nobody wants to deal with that. &lt;a href="https://medium.com/@azizhk/building-an-async-react-renderer-with-diffing-in-web-worker-f3be07f16d90" rel="noopener noreferrer"&gt;Aziz Khambati seems to have a good solution for a React renderer&lt;/a&gt;, but I don't think that's going to be the official release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fine, but I Need Something Now!
&lt;/h2&gt;

&lt;p&gt;This brings us to our project. &lt;a href="https://webdigital.com" rel="noopener noreferrer"&gt;WebDigital&lt;/a&gt; is a platform that enables users to develop websites visually. That's nothing new, but I think we are the only one that generates the content as a single-page application (SPA). The problem is that our websites were suffering from large first input delays, around 300ms on mobile devices. The framework we use is Preact, which is compatible with React but is a faster implementation. I'm sure somebody will implement async rendering at some point, but we needed it sooner than that. &lt;/p&gt;

&lt;h2&gt;
  
  
  Deep In Code
&lt;/h2&gt;

&lt;p&gt;I started looking at the source code of Preact. Render gets triggered from 2 places: the initial rendering and component updates. Render then "diffs" and "commits" recursively. I believe this is quite a common structure among SPA frameworks. The key to breaking up the rendering is to occasionally check in with the browser using requestIdleCallback and get a certain amount of time to execute. When we exceed that time, we need to wait until another call to requestIdleCallback grants us more time. JS developers will recognize that this requires async/await. &lt;/p&gt;

&lt;p&gt;My first implementation was naïve: make all the recursive routines async and await requestIdleCallback. It worked, but apparently async/await performance is quite bad when you call async functions recursively hundreds of times. My render time went from 100ms to 400ms, not counting the breaks. &lt;/p&gt;

&lt;p&gt;To solve the performance problem, I decided to use generators. In this architecture, only the outermost caller (render) is an async function, and it drives a generator function that yields a Promise only when we exceed the time limit. Then, when a Promise is returned, we await it until requestIdleCallback grants us more time. This still reduces performance, but not as drastically: a 100ms render took around 130ms, not counting breaks. That should be acceptable. &lt;/p&gt;
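&lt;p&gt;The generator idea can be sketched like this (hypothetical names, not Preact's actual internals): the recursive diff is a plain generator that yields a pause signal when it's over budget, and only the outermost render is async: &lt;/p&gt;

```javascript
// Recursive diff as a generator: no async frames in the recursion itself.
// visit does the per-node work; shouldYield reports when the budget is spent.
function* diffGen(node, visit, shouldYield) {
  visit(node);
  for (const child of node.children || []) {
    if (shouldYield()) { yield 'pause'; } // the async caller awaits idle time here
    yield* diffGen(child, visit, shouldYield);
  }
}

// Only this outermost function is async: it awaits more idle time
// (waitForIdle would wrap requestIdleCallback) whenever the diff pauses.
async function renderAsyncSketch(root, visit, shouldYield, waitForIdle) {
  for (const signal of diffGen(root, visit, shouldYield)) {
    if (signal === 'pause') { await waitForIdle(); }
  }
}
```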

&lt;p&gt;Alas, there were more hurdles to overcome. Just having async functions in the code increased the Preact bundle size by 2K! For a framework claiming to be the smallest, that is not acceptable. So I started working on a separate bundle. I had to take the "blocking" functions and turn them dynamically into "generator"/"async" functions. Because of this transformation, the minifier (Terser) broke the code by renaming/mangling properties. So I marked certain variables used in the async function generation as "reserved". I then created a separate bundle that contains the regular Preact code as well as the async version. &lt;/p&gt;

&lt;p&gt;With this new approach, the Preact core bundle size only increased by 46 bytes (minor changes plus a couple of hooks to override component rendering). The async bundle takes 6K, but it should be possible to reduce that in the future. Note that we are NOT doing "concurrent rendering", where components can be painted in a different order. We await each component render to completion when processing the render queue. I believe this is the way to avoid the bugs the React team encountered. &lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Here are the async rendering stats (note that the big block of 100ms render task is now executed over many little tasks): &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lmk7qrqdscvyzwy58mn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lmk7qrqdscvyzwy58mn.jpg" alt="Async Render"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bear in mind that this is still under &lt;a href="https://github.com/preactjs/preact/pull/3386" rel="noopener noreferrer"&gt;review by the Preact team&lt;/a&gt;, but if you need it desperately like us, feel free to try out &lt;a href="https://www.npmjs.com/package/preact-async" rel="noopener noreferrer"&gt;the preact-async package on npm&lt;/a&gt;. I'm hoping that the Preact team will accept this change and get it into the main package. &lt;/p&gt;

&lt;p&gt;Here's the main usage: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install preact-async instead of preact.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

npm remove preact
npm i preact-async


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Alias 'preact' to 'preact-async'. This process differs between bundlers, but here's how to do it for webpack: &lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resolve: {
    alias: {
        react: 'preact/compat',
        'react-dom': 'preact/compat',
        preact: 'preact-async'
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Due to the async nature of the module, certain variables need to remain unmangled. This list is exported from the module and can be used when minifying. Below is an example for webpack. If you minify the code without these reserved tokens, you will get an error.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

optimization: {
  ...
  minimize: true,
  minimizer: [ 
    new TerserPlugin({ 
      terserOptions: { 
        mangle: { 
          reserved: require('preact-async/async/reserved').minify.mangle.reserved 
        } 
      } 
    }) 
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Here's the code to use it: &lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import { render, renderAsync, h } from 'preact/async';

// create main application component
const mainComponent = h(App, {});

// serial rendering - use replaceNode if using SSR
render(mainComponent, document.getElementById('root')); 

// async rendering - you can await it - use replaceNode if using SSR
renderAsync(mainComponent, document.getElementById('root-async')); 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the environment does not support async functions/generators, or the code is running on the server, async rendering will fall back to blocking rendering. &lt;/p&gt;
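&lt;p&gt;A sketch of what such a fallback check might look like (an assumed approach and hypothetical names, not the package's exact code): &lt;/p&gt;

```javascript
// Detect async generator support without breaking older parsers:
// the Function constructor only parses the source string at runtime.
function engineSupportsAsyncGenerators() {
  try {
    new Function('async function* g() { yield 1; }');
    return true;
  } catch (e) {
    return false;
  }
}

// Pick the renderer: blocking on the server or on engines without support.
function chooseRender(renderAsync, render) {
  if (typeof window === 'undefined') { return render; } // server-side: block
  return engineSupportsAsyncGenerators() ? renderAsync : render;
}
```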

&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;p&gt;It's usually the initial render that's the problem, but in some cases component renders may need performance optimization as well. &lt;br&gt;
renderAsync will continue to respect the browser's time budget when processing the render queue, but if you are using blocking rendering, you can always set &lt;code&gt;options.debounceRendering = requestAnimationFrame&lt;/code&gt; in Preact. &lt;/p&gt;

&lt;p&gt;This methodology should be applicable to any framework out there. &lt;br&gt;
The basic idea is to dynamically create async/generator functions from the serial functions and insert a breakpoint at the start of the render recursion. Hopefully someone will find this useful.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>react</category>
      <category>preact</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
