Aleksandrovich Dmitrii

How to make your app indefinitely lazy – Part 4: Preload in Advance

Well, hello there! And welcome to part 4 of my ultimate guide! Brace yourself, because you are about to become a real pro.

⏱️ Reading time: ~15-22 minutes
🎓 Level: Intermediate+

Series Contents:

  1. How to make your app indefinitely lazy – Part 1: Why lazy loading is important
  2. How to make your app indefinitely lazy – Part 2: Dependency Graphs
  3. How to make your app indefinitely lazy – Part 3: Vendors and Cache
  4. How to make your app indefinitely lazy – Part 4: Preload in Advance

Earlier, we talked about how to improve our project's cacheability and correctly load vendors. And in this article, we will cover the following:

  • How we can utilize prefetching/preloading strategies, including:
    • What Webpack magic comments are and how they can help us;
    • What speculative (manual) prefetching is and how to use it;
  • How we can request data from the server without waiting for our static files to be downloaded;
  • As well as what third-party or our own solutions can be used for that.

Utilize magic comments

By now, we should have a good understanding of how to split our chunks efficiently and optimize the size and number of loaded files. But does it mean we optimized loading time to the maximum? Not quite. There's still room for improvement.

Imagine this scenario: you've split your app as much as possible, optimized your dependency tree, and separated your vendors. You open your website, and the initial loading time is significantly reduced, which is great! But when you navigate to a lazy-loaded page, the browser still takes some time to load it.

ℹ️ Lazy loading simply postpones the latency of loading the application.

  • If a component isn't used, we save time.
  • But if it is used, the latency simply happens later.

Despite this disadvantage, lazy loading is still a good optimization strategy for all the reasons I've already mentioned in this series. But we can make this strategy even better if we eliminate this delay. In order to do that, we can preload or prefetch our lazy chunks. And one of the ways to do that is utilizing Webpack's magic comments.

What are those? Throughout this series, you might have noticed that Webpack generated files named "Chapter1.chunk.js" and "Chapter2.chunk.js". These "Chapter1" and "Chapter2" bits were generated solely because I used a special "magic" comment: webpackChunkName. It doesn't affect the actual logic of our application; it only affects the names of the generated files.

const Chapter2 = React.lazy(
  () => import(/* webpackChunkName: "Chapter2" */ './pages/chapter-2/Chapter2')
);

There are actually a lot of magic comments, and I recommend you take some time to investigate them in Webpack's documentation. But for the sake of this article, we will focus only on webpackPrefetch and webpackPreload. Both of these comments affect the build so that our JavaScript files automatically add <link rel="prefetch"> (or "preload") elements to the document.

const Chapter2 = React.lazy(
  () => import(/* webpackPrefetch: true */ './pages/chapter-2/Chapter2')
);

When we use these comments, we tell the browser that we want to load chunks of our application in advance, even if they are not currently needed. And there are some differences between prefetch and preload.
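To make this concrete, here is a rough sketch (my own illustration, not Webpack's actual runtime code) of what such a comment boils down to in the browser: a <link> element pointing at the chunk's URL gets appended to the document. The DocLike shape and the chunk URL below are made up for the example.

```typescript
// Minimal structural types so this sketch can run outside a browser too.
interface LinkEl { rel: string; as: string; href: string }
interface DocLike {
  createElement(tag: 'link'): LinkEl;
  head: { appendChild(el: LinkEl): void };
}

// Roughly what webpackPrefetch compiles down to at runtime:
// the chunk's URL is appended to the document as a <link rel="prefetch">.
const prefetchChunk = (doc: DocLike, href: string): LinkEl => {
  const link = doc.createElement('link');
  link.rel = 'prefetch'; // webpackPreload would emit rel="preload" instead
  link.as = 'script';
  link.href = href;
  doc.head.appendChild(link);
  return link;
};
```

In a real browser you would call it as `prefetchChunk(document, '/static/Chapter2.chunk.js')`, which is structurally what the Webpack runtime does for us.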

Prefetch features

When using prefetch, the browser requests our files with the lowest priority. It tries to request these files when idle, i.e. when no other files are being downloaded. The browser also won't waste resources on parsing these files. After downloading a prefetched file, the browser saves it to the cache.


If we add webpackPrefetch for Chapter2 and open Chapter1, we will see that the browser downloads the chunks for both of these pages. However, prefetched files will have the text/javascript type, not script. This is because the browser does not try to parse these files.

If the files have already been prefetched and we open the Chapter2 page, the browser will make another request for the Chapter2 files. However, this time the files will be taken from the cache, and the browser will only spend time parsing them.


Thus, we have eliminated the very delay that we discussed at the beginning of the article.

You may also have noticed that Chapter2 is mostly loaded at the same time as Chapter1, which may look like a violation of the "idle-only request" rule. However, this is not the case. The browser can download multiple files at the same time, and when it sees that it has spare capacity, it can start prefetching a file even while other files are still loading.

Preload features

The situation with preload is slightly different:

  • The browser will download these files with high priority. In the previous screenshots, you can see that by default, executable files are downloaded with low priority, so the browser may prioritize preloaded files over executable ones.
  • Also, preloaded files are downloaded with the script type, i.e. the browser parses them immediately.
  • However, webpackPreload does not work in entry files. But we won't go into that in detail in this article.


Using prefetch and preload can significantly improve the performance of our website, and at the same time, the UX. However, incorrect configuration of these directives can also lead to performance degradation.

The topic of using preload and prefetch is relatively extensive in itself. To fully understand the risks of misapplication of these directives, you need to:

  • understand the difference between the HTTP/1.1, HTTP/2, and HTTP/3 protocols;
  • understand how the resource manager works in browsers and how they download files with different priorities (lowest, low, high, highest);
  • in high-load projects, also take into account the volume of traffic and the cost of CDN services;
  • and understand some other minor aspects.

In this article, we will not go into the details of these directives. However, I plan to write a separate article covering all these issues later, and I'll attach a link to it in this block when I do. But I will still briefly touch on why excessive use of prefetch and preload can be dangerous.

In the Webpack documentation, the official recommendation is to use prefetch and preload exclusively for critical resources of our projects. For example, we can use prefetch for frequently used lazy pages, but we shouldn't use it for rarely used modal windows.

Excessive use of preload

The worst thing we can do is configure preload incorrectly. As I mentioned, preloaded scripts are loaded with high priority, while executable scripts are loaded with low priority by default. This means that, by default, the browser prioritizes downloading preloaded files over executable ones.

Here is an example of how the browser downloads files if we open the Chapter1 page but use preload for the Chapter2 page. Even though chapter1.chunk.js is needed to display the page, the browser downloads chapter2.chunk.js first, and only then starts downloading chapter1.chunk.js. So by adding preload to our site, we significantly degraded the loading speed of the Chapter1 page.


To give you a real world case of this problem, I'll tell you about one of my projects. I was working on a micro frontend application in which multiple applications had to be displayed on one page. Some MFE applications were less important than others in terms of business value. But because one less important application incorrectly configured its preload, the browser first performed 120 requests (yes, all with preload) and downloaded 11 MB of files before starting to download the more important application. This led to significant delays for users with poor internet connections, especially when they were using a VPN.

Of course, my case is an extreme one, and most projects will never run into problems this serious. But, as you have seen, even in a tiny pet project we managed to degrade performance by being careless.

Excessive use of prefetch

With prefetch, things are not so dire in terms of loading time. Since prefetched files are downloaded with the lowest priority, the browser will prioritize executable files over them. However, because of the browser's resource manager, even lowest-priority files may slightly interfere with downloading low-priority ones.

The key difference between preload and prefetch is that with preload, the browser first downloads absolutely all the files we marked for preloading. With prefetch, if some prefetched files have already started downloading, the browser will finish them, then download the executable files, and only then continue with the remaining prefetch files.
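As a toy model of this scheduling (a deliberate simplification, not the real browser resource manager), we can think of pending downloads as being sorted by priority: preloaded files outrank executable scripts, while prefetched files always yield to them.

```typescript
// Toy model of the download ordering described above. Real browsers use a
// far more nuanced resource manager; this only illustrates the ranking.
type Kind = 'preload' | 'script' | 'prefetch';

const priority: Record<Kind, number> = {
  preload: 0,  // downloaded first, ahead of executable scripts
  script: 1,   // the files the page actually needs to run
  prefetch: 2, // only when the browser has spare capacity
};

const downloadOrder = (files: { name: string; kind: Kind }[]): string[] =>
  [...files]
    .sort((a, b) => priority[a.kind] - priority[b.kind])
    .map((f) => f.name);
```

This is exactly why a misconfigured preload can starve the chunks your current page needs, while a misconfigured prefetch mostly just wastes bandwidth.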

However, again, download speed is only part of the problem. So try to be careful when using both prefetch and preload.

📌 Use Webpack's prefetching and preloading for mission-critical chunks of your application. But do not overdo it, so as not to lower the performance of your site.


Speculative/Manual preloading

However, we can still improve things further. First of all, even if we do use Webpack's prefetch, React will still briefly show Suspense's fallback while retrieving the files from the cache and parsing them. Only for a brief moment. This delay is not significant, but the visual *blink* is not ideal UX-wise. On top of that, we still need some strategy for non-critical chunks.

What we can do is try to predict (or speculate) when a component will be used based on the user's actions, and download it manually. And we can be really creative about it. Here's what I do in my current project:

  • When a user hovers their mouse over a link, we manually preload all the files for displaying the page this link leads to.
    • As a matter of fact, this is the most basic strategy of speculative preloading, and some frameworks (e.g. NextJS) provide such an optimization by default.
  • When a user hovers over a button, or focuses on it with a keyboard, a modal window or a side sheet might be downloaded.
  • Some elements are loaded based on scrolling, by observing when they enter the visible part of the page.
  • Also, my website has a search bar that leads to different pages depending on the query the user provides. And while the user is typing their query, I already try to download the page they might be interested in.
  • And there's more. I hope you got the idea.

The only drawback of this approach: with speculative manual preloading, we need to wire up every single component's preload manually, which adds time to the development process. But don't be scared of it. Most speculative preloading scenarios can be automated, and those that can't are still easy to introduce and maintain.

The solution is easy

Here's an HOC we can use instead of React.lazy. It is very short: about 20 lines of code. Mostly, this code is just a wrapper around lazy, and it only adds a new static preload method. And this small piece of code is already sufficient for you to start using speculative preloading.

lazyWithPreloading.tsx
import {
  type ComponentType,
  type NamedExoticComponent,
  lazy,
  memo,
  useRef,
} from 'react';

// No extra options are used in this article; the type is kept for future extension.
type TConfig = Record<string, never>;

export type LazyPreloadableComponent<T> = NamedExoticComponent<T> & {
  preload: () => Promise<void>;
};

export const lazyWithPreload = <T,>(
  request: () => Promise<{ default: ComponentType<T> }>,
  config: TConfig = {},
): LazyPreloadableComponent<T> => {
  const ReactLazyComponent = lazy(request);
  let PreloadedComponent: ComponentType<T> | undefined;

  const Component = memo((props: T) => {
    // Capture whichever version is available on the first render.
    const ComponentToRender = useRef(PreloadedComponent ?? ReactLazyComponent).current;
    return <ComponentToRender {...(props as any)} />;
  }) as unknown as LazyPreloadableComponent<T>;

  Component.preload = async () => {
    const module = await request();
    PreloadedComponent = module.default;
  };

  return Component;
};

Recently, I haven't been working with SSR applications much, so I don't feel the need for full-fledged third-party solutions to load my components lazily. But if you do have an SSR application, or you just don't want to copy-paste this 20-line snippet, you can use a third-party solution such as @loadable/component or react-lazy-with-preload.

And this is how we could use it in App.tsx:

App.tsx
const Chapter2 = lazyWithPreload(
  () => import(
    /* webpackChunkName: "Chapter2" */
    /* webpackPrefetch: true */
    './pages/chapter-2/Chapter2'
  )
);

const Chapter1 = lazyWithPreload(
  () => import(/* webpackChunkName: "Chapter1" */ './pages/chapter-1/Chapter1')
);

export const App = () => (
  <HashRouter>
    <span className="loaded-at">
      Loading time: {loadedAt}ms
    </span>

    <nav className="navigation">
      <ul>
        <li><Link to="/">Title</Link></li>
        <li><Link to="/chapter-1" onMouseEnter={() => Chapter1.preload()}>Chapter 1</Link></li>
        <li><Link to="/chapter-2" onMouseEnter={() => Chapter2.preload()}>Chapter 2</Link></li>
      </ul>
    </nav>

    <Suspense fallback="Loading main...">
      <div className="book-grid">
        <Routes>
          <Route path="/" element={<Title />} />
          <Route path="/chapter-1" element={<Chapter1 />} />
          <Route path="/chapter-2" element={<Chapter2 />} />
        </Routes>
      </div>
    </Suspense>
  </HashRouter>
);

Now, whenever a user hovers over any of these links, the page's chunks are downloaded manually. And when the user clicks the link, there's a good chance the page will be displayed instantly.
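One practical detail: hover and focus handlers can fire many times, and each call re-invokes the dynamic import (cheap thanks to the module cache, but still avoidable work). A tiny memoizing wrapper, sketched below under my own naming, guarantees a single real call and lets all callers share the same promise:

```typescript
// Make any preload function idempotent: repeated hover/focus events
// trigger only one real call; everyone gets the same pending promise.
const preloadOnce = <T,>(fn: () => Promise<T>): (() => Promise<T>) => {
  let pending: Promise<T> | undefined;
  return () => {
    if (!pending) pending = fn();
    return pending;
  };
};
```

Create the wrapper once per component, outside of render, e.g. `const preloadChapter2 = preloadOnce(Chapter2.preload);`, and pass that to the event handlers.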

📌 Try preloading your lazy components manually, based on user interactions.

Using speculative preloading does not mean that we should abandon prefetch. These strategies complement each other. However, considering how dangerous preload can be, I personally try to avoid using it.

ℹ️ With a combination of Webpack's prefetch and manual preloading, we can seemingly display our components as if they are not lazy at all, which resolves the only real drawback lazy loading has.

How to reduce manual labour

As I already mentioned, we can automate some of the speculative preloading scenarios. For example, buttons that open modal windows or links that open pages may appear anywhere in the app. And instead of manually adding a preload call each time we use these components, we can conceal this logic inside the components themselves.

export const ButtonWithModal = (props) => {
  const [isOpen, setIsOpen] = useState(false);

  return (
    <>
      <Button
        onMouseOver={() => ModalLazy.preload()}
        onFocus={() => ModalLazy.preload()}
        onClick={() => setIsOpen(true)}
        {...props}
      >
        Open window
      </Button>

      {isOpen && <ModalLazy onClose={() => setIsOpen(false)} />}
    </>
  );
};

import { RoutePathToComponentMap } from '../somewhere';

export const Link = (props) => {
  const page = RoutePathToComponentMap[props.href];

  return (
    <a
      {...props}
      onMouseOver={() => page?.preload()}
      onFocus={() => page?.preload()}
    />
  );
};

Preload the data, not just files

So far, we've been discussing how to optimize lazy loading of static files. But real applications usually also rely on server data. In the component approach, it is considered good practice to call the API from the component that needs it. Like this:

Chapter1.tsx + API request
export default () => {
  const { data, isLoading } = useApi<{}, { ping: 'pong' }>('POST', '/api/data');

  return (
    <>
      <section className="page">
        <h2 style={{ margin: 'auto' }}>Chapter 1</h2>
      </section>

      {isLoading ? <section className="page">Loading...</section> : <Content data={data} />}
    </>
  );
}

It is an okay approach, although it has a major flaw: the data will be requested only after the chunks for displaying the page are downloaded.


However, we can do better. We can request data without waiting for the lazy chunk. And to make this possible, the code responsible for the API request must live in the initial chunk. It's easy: instead of calling the API in Chapter1.tsx, we can make this component take the data as a prop.

Chapter1.tsx + API data as a prop
type Props = {
  requestData: { data?: { ping: 'pong' }, isLoading: boolean };
}

export default ({ requestData }: Props) => {
  const { data, isLoading } = requestData;

  return (
    <>
      <section className="page">
        <h2 style={{ margin: 'auto' }}>Chapter 1</h2>
      </section>

      {isLoading ? <section className="page">Loading...</section> : <Content data={data} />}
    </>
  );
}

You may have noticed that the content of Chapter1.tsx stayed almost the same: we changed only one line of code, so this approach is mostly harmless DX-wise. The only question is where we should request the data then.

We could request it in the App component, but that would violate the component approach, which we don't want to do either. So what can we do?

There are actually various ways to request it. I like to use my own solution: a wrapper on top of lazyWithPreload, which itself is a wrapper on top of React.lazy.

Own solution HOC: lazyWithPreloadAndPrefetch.tsx
export const lazyWithPreloadAndPrefetch = <T, Props>(
  request: () => Promise<{ default: ComponentType<T> }>,
  {
    usePrefetch,
    ...config
  }: TConfig & {
    usePrefetch: (props: Props) => T;
  },
): LazyPreloadableComponent<Props> => {
  const LazyComponent = lazyWithPreload(request, config);

  const Component = memo((props: Props) => {
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    return <LazyComponent {...(usePrefetch(props) as any)} />;
  }) as unknown as LazyPreloadableComponent<Props>;

  Component.preload = LazyComponent.preload;

  return Component;
};

I couldn't find a convenient third-party solution for client-rendered applications. However, there are solutions for loading data on the server side using SSR or React Server Components. I'd like to talk about them too, but they're not really about lazy loading, and they would inflate the series even more. Still, for the sake of decency, I'll give you an example of how to preload data using @loadable/component.

const Chapter1Lazy = lazyWithPreloadAndPrefetch(
  () => import(/* webpackChunkName: "Chapter1" */ /* webpackPrefetch: true */ './pages/chapter-1/Chapter1'), {
    usePrefetch: () => {
      const requestData = useApi<{}, { ping: 'pong' }>('POST', '/api/data');

      return { requestData };
    },
  });

// or

import loadable from '@loadable/component';

const Chapter1Lazy = loadable(() =>
  Promise.all([
    import(/* webpackChunkName: "Chapter1" */ /* webpackPrefetch: true */ './pages/chapter-1/Chapter1'),
    request<{}, { ping: 'pong' }>('POST', '/api/data'),
  ]).then(([module, data]) => {
    const Chapter1 = module.default;

    return function () {
      return <Chapter1 requestData={{ data, isLoading: false }} />;
    };
  }),
  {
    fallback: <div>Loading components and data...</div>,
  }
);

This way, we also managed not to violate the component approach. Even though the request logic is not stored in the lazy component, it's still part of Chapter1Lazy. And the data will be requested only when the component is rendered, i.e. only when the data is actually needed.

And now our waterfall looks like this. Notice that the data is requested almost at the same time as the lazy chunks? This way, we download data from the server while our lazy chunks are being downloaded. The data is requested about 200 ms earlier, therefore it's displayed about 200 ms earlier. A win.


Admittedly, one might object that this approach pushes data-fetching code into the initial bundle, which increases its size and hurts UX by slowing down the initial load. And that's fair. But if you keep the request logic as lean as possible, the extra weight will be negligible, while the overall loading experience will improve significantly.
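To show what "lean" might look like, here is a hypothetical minimal request helper for the initial bundle: a thin typed wrapper with no heavy API-client library attached. This is my own sketch, not the useApi hook from the examples, and the fetch implementation is passed in explicitly (the global fetch in the browser), which also keeps the sketch testable.

```typescript
// A structural fetch type so the helper doesn't depend on DOM typings.
type FetchLike = (
  url: string,
  init: { method: string },
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

// Thin request wrapper: a few lines of code is all the initial bundle pays.
const request = async <Res,>(
  fetchImpl: FetchLike, // pass the global fetch in the browser
  method: 'GET' | 'POST',
  url: string,
): Promise<Res> => {
  const res = await fetchImpl(url, { method });
  if (!res.ok) throw new Error(`${method} ${url} failed with status ${res.status}`);
  return (await res.json()) as Res;
};
```

A couple of helpers like this add almost nothing to the initial chunk, unlike pulling a full-featured data-fetching library into it.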

And this approach won't worsen cacheability either, because initial JavaScript files lose their cache on every release anyway, regardless of what changes we make.

📌 Try to request data from the server without waiting for your lazy chunk to be downloaded.

A real-life example

This approach is not limited to preloading data for pages only. We can be creative with it. Let me show you how I utilized this approach in my current project. This is simplified code from one of the pages:

export default ({ roomID }: { roomID: string }) => {
  const { data, isLoading } = useApi('POST', '/api/room', {
    params: { roomID },
  });

  return (
    <>
      <section className="left">
        <VideoPlayer data={data} isLoading={isLoading} />
      </section>

      <div className="right">
        {data && (
          <>
            <Card1Lazy room={data} roomID={roomID} />
            <Card2Lazy room={data} roomID={roomID} />
            <Card3Lazy room={data} roomID={roomID} />
          </>
        )}
      </div>
    </>
  );
}

And each card used to look like this:

export default ({ room, roomID }: { room: RoomData, roomID: string }) => {
  const { data, isLoading } = useApi('POST', '/api/card/1', {
    params: { roomID },
  });

  return (...)
};

The overall timeline of rendering these cards looked like this:


  1. The browser downloads initial JavaScript chunks
  2. The browser downloads lazy chunks for this particular page
  3. The browser requests data about a livestream room and waits until it gets the response.
    • Each of the cards needs room data to render its UI, but not to request its own data. However, since the card components cannot be rendered without room data, no card data is requested.
  4. Then, having received the room data, the cards begin to render, and the browser downloads their lazy chunks.
  5. Then, for each of the cards, we request its data.

To display a card, users had to wait between 2.8 and 4.1 seconds, depending on card API latency.

But here's what I did:

  1. Started requesting room data in parallel with the page's lazy chunks.
  2. Started requesting card data right after the page's lazy chunks are loaded. Even though we can't render the cards yet, we can already request their data.


Now, each of the cards waits for 3 parallel processes: livestream data, card API data, and card static files. Until all three are finished, the fallback is displayed. I parallelized the loading as much as possible and managed to reduce the display time for each of the cards by approximately 1.4 seconds. Plus, the video player is now displayed 0.2 seconds faster as well.
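Schematically (ignoring the React specifics), the parallelized data flow can be sketched as a single loader that fires all requests up front. The fetch functions and types here are hypothetical stand-ins for the real API calls:

```typescript
// Hypothetical stand-ins for the real API shapes.
type RoomData = { roomID: string };
type CardData = { title: string };

// Fire the room request and all three card requests concurrently;
// none of them waits for another response.
const loadRoomPage = async (
  roomID: string,
  fetchRoom: (id: string) => Promise<RoomData>,
  fetchCard: (id: string, card: number) => Promise<CardData>,
) => {
  const [room, cards] = await Promise.all([
    fetchRoom(roomID),
    Promise.all([1, 2, 3].map((n) => fetchCard(roomID, n))),
  ]);
  return { room, cards };
};
```

The sequential version would await the room data before touching the cards; this one makes the total wait roughly the slowest request instead of the sum of them.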

📌 Be creative.


Conclusion

Thank you for joining me once again on our journey to make our web applications indefinitely lazy. If you have any questions, feel free to ask them in the comments.

And to summarize this article, let's list the rules we learned today:

  • 📌 Use Webpack's prefetching and preloading for mission-critical chunks of your application. But do not overdo it, so as not to lower the performance of your site.
  • 📌 Try preloading your lazy components manually, based on user interactions.
  • 📌 Try to request data from the server without waiting for your lazy chunk to be downloaded.
  • 📌 And most importantly: Be creative.

A-a-and that was it. I hope some of you read all the articles in this series, and maybe even caught something new from them. So, what do you think? Am I overcomplicating it, or is lazy loading more complicated than you thought? Let me know in the comments.

Here are my social links: LinkedIn Telegram GitHub. See you ✌️
