DEV Community

Aleksandrovich Dmitrii
How to make your app indefinitely lazy – Part 4: Preload in Advance

Well, hello there! And welcome to part 4 of my ultimate guide! Brace yourself, because you are about to become a real pro.

⏱️ Reading time: ~15-22 minutes
🎓 Level: Intermediate+

Series Contents:

  1. How to make your app indefinitely lazy – Part 1: Why lazy loading is important
  2. How to make your app indefinitely lazy – Part 2: Dependency Graphs
  3. How to make your app indefinitely lazy – Part 3: Vendors and Cache
  4. How to make your app indefinitely lazy – Part 4: Preload in Advance

Earlier, we talked about how to improve our project's cacheability and correctly load vendors. In this article, we will cover the following:

  • How we can utilize prefetching/preloading strategies, including:
    • What Webpack magic comments are and how they can help us;
    • What speculative or manual prefetching is and how to use it;
  • How we can request data from the server without waiting for our static files to be downloaded;
  • As well as what third-party or homegrown solutions can be used for that.

Utilize magic comments

By now, we should have a good understanding of how to split our chunks efficiently and optimize the size and number of loaded files. But does it mean we optimized loading time to the maximum? Not quite. There's still room for improvement.

Imagine this scenario: you've split your app as much as possible, optimized your dependency tree, and separated your vendors. You open your website, and the initial loading time is significantly reduced, which is great! But when you navigate to a lazy-loaded page, the browser still takes some time to load it.

ℹ️ Lazy loading simply postpones the latency of loading the application.

  • If a component isn't used, we save time.
  • But if it is used, the latency simply happens later.

Yes, it is still a good strategy for all the reasons I mentioned in this series. But we can make it even better if we remove that latency. To do that, we can preload or prefetch our lazy chunks, and one of the ways is utilizing Webpack's magic comments.

What are those? Throughout this series, you might have noticed that Webpack generated files named "Chapter1.chunk.js" and "Chapter2.chunk.js". These "Chapter1" and "Chapter2" parts appeared solely because I used a special "magic" comment: webpackChunkName. It doesn't affect the actual logic of our application; it only affects the names of the generated files.

const Chapter2 = React.lazy(
  () => import(/* webpackChunkName: "Chapter2" */ './pages/chapter-2/Chapter2')
);

There are actually a lot of magic comments, and I recommend you take some time to explore them in Webpack's documentation. But for the sake of this article, we will only look at webpackPrefetch and webpackPreload. Both of these comments make the build inject <link rel="prefetch"> (or rel="preload") elements into the document automatically. The difference is priority: prefetch hints that the chunk may be needed for a future navigation and is fetched while the browser is idle, whereas preload requests the chunk for the current navigation with high priority.

const Chapter2 = React.lazy(
  () => import(/* webpackPrefetch: true */ './pages/chapter-2/Chapter2')
);

And now, while a browser is idle, i.e., when it isn't busy downloading anything else, it will download JavaScript files in the background and cache them. Later, when the code is actually needed, it will be retrieved from cache, and execution will feel almost instant. And the nice part is that Webpack will also prefetch all the dependencies of that chunk.
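Under the hood, the effect is roughly equivalent to Webpack's runtime appending a hint element to the document head. Here's a tiny sketch of the markup it produces (the helper name and the chunk URL below are my own illustration, not Webpack API):

```typescript
// Build the resource-hint markup that effectively ends up in <head>.
// `prefetch` hints at a likely future navigation; `preload` targets the
// current navigation and carries an explicit `as` attribute.
export const hintLinkFor = (rel: 'prefetch' | 'preload', chunkUrl: string): string =>
  rel === 'preload'
    ? `<link rel="preload" as="script" href="${chunkUrl}">`
    : `<link rel="prefetch" href="${chunkUrl}">`;
```

So a prefetched Chapter2 chunk would end up as something like `<link rel="prefetch" href="/static/js/Chapter2.chunk.js">` in the document head.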

In dev tools, utilizing Webpack's prefetching looks like this. Imagine we added webpackPrefetch to the Chapter2 page and opened the Chapter1 page.

[Screenshot: DevTools network waterfall showing Chapter1 loading first and Chapter2 prefetched shortly after]

As you can see, the browser downloads Chapter1 and its vendor chunk first. Then, after a small delay, it downloads Chapter2 with its dependencies. You may have also noticed that the prefetch requests have a different type in DevTools: "text/javascript" rather than "script".

Later, when we switch to Chapter 2, we will see that the page opens in under 5 ms, which is a significant improvement compared to 200 ms.

[Screenshot: DevTools showing Chapter 2 served from the prefetch cache in under 5 ms]

This approach is very useful. However, the official Webpack guideline recommends using it only for critically important chunks, such as top-level pages or components that are used frequently across the app.

You may have also noticed that Chapter2 is mostly being downloaded together with Chapter1, which may be perceived as a violation of "requesting in idle only". However, it is actually not a violation. The browser is capable of downloading multiple files simultaneously. And when it sees that there are some unused "request slots", it prefetches the file.

This behavior improves the website's performance. However, it can also backfire when prefetch is used carelessly. The more files there are to prefetch, the higher the chance the browser starts spending its "request slots" on prefetching while you still have files that must be downloaded for execution. This can happen when execution-critical files are requested with a delay for some reason: the browser may fill the slots during that delay.

To give you a real-world perspective on this problem, I'll tell you about one of my projects. I was working on a micro-frontend application where several applications had to be rendered on a single page. Understandably, some applications were less important than others in terms of business value. But because one of the less important apps hadn't set up its prefetching correctly, and the more important app rendered with a small delay, the browser ended up making 120 requests and downloading 11 MB of data before it even started downloading the more important app. That caused significant delays for users on a bad internet connection, especially those using a VPN.

Of course, my case is an extreme exaggeration, and most projects won't face such dire problems. But you should still be cautious with Webpack's prefetch, not least because that caution is part of the official guideline.

📌 Use Webpack's prefetching and preloading for critically important chunks for your app.


Speculative/Manual preloading

However, we can still improve things further. First of all, even if we do use Webpack's prefetch, React will still briefly show the Suspense fallback while retrieving files from cache and parsing them. Just for a brief moment. This delay is not significant, but the visual *blink* is not ideal UX-wise. On top of that, we still need some strategy for non-critical chunks.

What we can do is try to predict (or speculate) when a component will be used, based on the user's actions, and download it manually. And we can be really creative about it. Here's what I do in my current project:

  • When a user hovers their mouse over a link, we manually preload all the files needed to display the page this link leads to.
    • As a matter of fact, this is the most basic strategy of speculative preloading, and some frameworks (e.g., Next.js) provide such an optimization by default.
  • When a user hovers over a button, or focuses on it with a keyboard, a modal window or a side sheet might be downloaded.
  • Some elements are downloaded based on the user's scroll position and intersection observation.
  • Also, my website has a search bar that leads to different pages depending on the query the user provides. While the user is typing their query, I already start downloading the page they are likely to be interested in.
  • And there's more. I hope you get the idea.
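The scroll-based strategy from the list above can be sketched with an IntersectionObserver. The helper below is my own illustration (the names are not from this article, and the observer constructor is injectable only so the sketch stays self-contained outside a browser):

```typescript
// Minimal shape of what we need from IntersectionObserver, declared locally
// so the sketch compiles without DOM typings.
type Entry = { isIntersecting: boolean };
type ObserverLike = { observe: (el: unknown) => void; disconnect: () => void };
type ObserverCtor = new (
  cb: (entries: Entry[]) => void,
  opts?: { rootMargin?: string },
) => ObserverLike;

type Preloadable = { preload: () => Promise<void> };

// Preload `component` once `el` scrolls near the viewport, then stop observing.
export const preloadWhenVisible = (
  el: unknown,
  component: Preloadable,
  Ctor: ObserverCtor,
): ObserverLike => {
  const observer = new Ctor(
    (entries) => {
      if (entries.some((e) => e.isIntersecting)) {
        component.preload();
        observer.disconnect();
      }
    },
    { rootMargin: '200px' }, // start downloading slightly before the element appears
  );
  observer.observe(el);
  return observer;
};
```

In the browser you would call it as `preloadWhenVisible(placeholderEl, Card1Lazy, IntersectionObserver)`.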

The only drawback of this approach is that, if we opt for speculative manual preloading, we need to preload every single component manually, which adds time to the development process. But don't be scared of it. Most speculative preloading scenarios can be automated, and those that can't are still easy to introduce and maintain.

A short solution

Here's a HOC we can use instead of React.lazy. It is very short: just 20 lines of code. Mostly it's a wrapper around lazy that adds a new static preload method. And this small piece of code is already sufficient to start using speculative preloading.

lazyWithPreloading.tsx
import {
  ComponentType,
  NamedExoticComponent,
  lazy,
  memo,
  useRef,
} from 'react';

// Placeholder for any extra options you may want to pass through.
type TConfig = Record<string, unknown>;

export type LazyPreloadableComponent<T> = NamedExoticComponent<T> & {
  preload: () => Promise<void>;
};

export const lazyWithPreload = <T,>(
  request: () => Promise<{ default: ComponentType<T> }>,
  config: TConfig = {},
): LazyPreloadableComponent<T> => {
  const ReactLazyComponent = lazy(request);
  let PreloadedComponent: ComponentType<T> | undefined;

  const Component = memo((props: T) => {
    // Use the preloaded component if it's ready; otherwise fall back to React.lazy.
    const ComponentToRender = useRef(PreloadedComponent ?? ReactLazyComponent).current;
    return <ComponentToRender {...(props as any)} />;
  }) as unknown as LazyPreloadableComponent<T>;

  Component.preload = async () => {
    await request().then((module) => {
      PreloadedComponent = module.default;
    });
  };

  return Component;
};

Recently, I haven't been working with SSR applications much, so I don't feel the need for full-fledged third-party solutions to load my components lazily. But if you do have an SSR application, or you just don't want to copy and paste this 20-line piece of code, you can use a third-party solution such as @loadable/component or react-lazy-with-preload.
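One small refinement worth considering (my own addition, not part of the HOC above): since preload is often wired to noisy events like onMouseMove, you can guard it so repeated calls share a single in-flight request. In practice the bundler caches dynamic imports anyway, so this mainly avoids redundant promise churn:

```typescript
// Wrap an async function so repeated calls reuse one in-flight promise.
export const once = <T,>(fn: () => Promise<T>): (() => Promise<T>) => {
  let promise: Promise<T> | undefined;
  return () => (promise ??= fn());
};

// e.g. Component.preload = once(Component.preload);
```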

And this is how we could use it in App.tsx:

App.tsx
// Imports and `loadedAt` are assumed from the earlier parts of this series.
import { Suspense } from 'react';
import { HashRouter, Link, Route, Routes } from 'react-router-dom';
import { Title } from './pages/title/Title';
import { lazyWithPreload } from './lazyWithPreloading';
import { loadedAt } from './loadedAt';

const Chapter2 = lazyWithPreload(
  () => import(
    /* webpackChunkName: "Chapter2" */
    /* webpackPrefetch: true */
    './pages/chapter-2/Chapter2'
  )
);

const Chapter1 = lazyWithPreload(
  () => import(/* webpackChunkName: "Chapter1" */ './pages/chapter-1/Chapter1')
);

export const App = () => (
  <HashRouter>
    <span className="loaded-at">
      Loading time: {loadedAt}ms
    </span>

    <nav className="navigation">
      <ul>
        <li><Link to="/">Title</Link></li>
        <li><Link to="/chapter-1" onMouseMove={() => Chapter1.preload()}>Chapter 1</Link></li>
        <li><Link to="/chapter-2" onMouseMove={() => Chapter2.preload()}>Chapter 2</Link></li>
      </ul>
    </nav>

    <Suspense fallback="Loading main...">
      <div className="book-grid">
        <Routes>
          <Route path="/" element={<Title />} />
          <Route path="/chapter-1" element={<Chapter1 />} />
          <Route path="/chapter-2" element={<Chapter2 />} />
        </Routes>
      </div>
    </Suspense>
  </HashRouter>
);

Now, whenever a user hovers over any of these links, the corresponding page chunk is preloaded. And when the user clicks the link, there's a good chance the page will be displayed instantly.

📌 Try preloading your lazy components manually, based on user interactions.

If we use speculative preloading, it doesn't mean we should stop using Webpack's prefetching. These strategies complement each other.

ℹ️ With a combination of Webpack's prefetch and manual preloading, we can display our components as if they were not lazy at all, which resolves the only drawback lazy loading has.

How to reduce manual labour

As I already mentioned, we can automate some of the speculative preloading scenarios. For example, buttons that open modal windows or links that open pages may appear anywhere in the app. Instead of manually adding a preload call each time we use these components, we can conceal such logic inside the components themselves.

// `Button` and `ModalLazy` are assumed to be imported from elsewhere in the app.
import { useState } from 'react';

export const ButtonWithModal = (props) => {
  const [isOpen, setIsOpen] = useState(false);

  return (
     <>
       <Button
         onMouseOver={() => ModalLazy.preload()}
         onFocus={() => ModalLazy.preload()}
         onClick={() => setIsOpen(true)}
         {...props}
       >
         Open window
       </Button>

       {isOpen && <ModalLazy /* …props elided */ />}
     </>
  );
};

import { RoutePathToComponentMap } from '../somewhere';

export const Link = (props) => {
  const page = RoutePathToComponentMap[props.href];

  return (
    <a
      {...props}
      onMouseOver={() => page?.preload()}
      onFocus={() => page?.preload()}
    />
  )
};

Preload the data, not just static files

So far, we've been discussing how to optimize the lazy loading of static files. But real applications usually also rely on server data. In the component approach, it is considered good practice to call the API from the component that needs it. Like this:

Chapter1.tsx + API request
export default () => {
  const { data, isLoading } = useApi<{}, { ping: 'pong' }>('POST', '/api/data');

  return (
    <>
      <section className="page">
        <h2 style={{ margin: 'auto' }}>Chapter 1</h2>
      </section>

      {isLoading ? <section className="page">Loading...</section> : <Content data={data} />}
    </>
  );
}

It's an okay approach, but it has a major flaw: the data is requested only after the chunks for displaying the page have been downloaded.

[Screenshot: network waterfall where the API request starts only after the page chunks finish downloading]

However, we can do better. We can request the data without waiting for the lazy chunk. To make this possible, the code responsible for the API request must live in the initial chunk. It's easy: instead of calling the API inside Chapter1.tsx, we make the component take the data as a prop.

Chapter1.tsx + API data as a prop
type Props = {
  requestData: { data?: { ping: 'pong' }, isLoading: boolean };
}

export default ({ requestData }: Props) => {
  const { data, isLoading } = requestData;

  return (
    <>
      <section className="page">
        <h2 style={{ margin: 'auto' }}>Chapter 1</h2>
      </section>

      {isLoading ? <section className="page">Loading...</section> : <Content data={data} />}
    </>
  );
}

You may have noticed that the content of Chapter1.tsx stayed almost the same: we changed only one line of code, so this approach is mostly harmless DX-wise. The only question is where we should request the data.

We could request it in the App component, but that would violate the component approach, which we don't want either. So what can we do?

There are actually various ways to request it. I like to use my own solution: a wrapper on top of lazyWithPreload, which is itself a wrapper on top of React.lazy.

Own solution HOC: lazyWithPreloadAndPrefetch.tsx
import { ComponentType, memo } from 'react';
import {
  LazyPreloadableComponent,
  lazyWithPreload,
} from './lazyWithPreloading';

// Placeholder for any extra options, as in lazyWithPreload.
type TConfig = Record<string, unknown>;

export const lazyWithPreloadAndPrefetch = <T, Props>(
  request: () => Promise<{ default: ComponentType<T> }>,
  {
    usePrefetch,
    ...config
  }: TConfig & {
    usePrefetch: (props: Props) => T;
  },
): LazyPreloadableComponent<Props> => {
  const LazyComponent = lazyWithPreload(request, config);

  const Component = memo((props: Props) => {
    // usePrefetch is a hook: it fires the API request immediately on render,
    // while the lazy chunk is still being downloaded.
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    return <LazyComponent {...(usePrefetch(props) as any)} />;
  }) as unknown as LazyPreloadableComponent<Props>;

  Component.preload = LazyComponent.preload;

  return Component;
};

I couldn't find a fully convenient third-party solution for this, but we can still try utilizing @loadable/component again.

const Chapter1Lazy = lazyWithPreloadAndPrefetch(
  () => import(/* webpackChunkName: "Chapter1" */ /* webpackPrefetch: true */ './pages/chapter-1/Chapter1'), {
    usePrefetch: () => {
      const requestData = useApi<{}, { ping: 'pong' }>('POST', '/api/data');

      return { requestData };
    },
  });

// or

import loadable from '@loadable/component';

const Chapter1Lazy = loadable(
  () =>
    Promise.all([
      import(/* webpackChunkName: "Chapter1" */ /* webpackPrefetch: true */ './pages/chapter-1/Chapter1'),
      request<{}, { ping: 'pong' }>('POST', '/api/data'),
    ]).then(([chapterModule, data]) => {
      const Chapter1 = chapterModule.default;

      return function Chapter1WithData() {
        return <Chapter1 requestData={{ data, isLoading: false }} />;
      };
    }),
  {
    fallback: <div>Loading components and data...</div>,
  }
);

This way, we also managed not to violate the component approach. Even though the request logic is not stored in the lazy component itself, it's still part of Chapter1Lazy.

And now our waterfall looks like this. Notice that the data is requested almost at the same time as the lazy chunks? We download the data from the server while our lazy chunks are still being downloaded. The data is requested about 200 ms earlier, and therefore it's displayed about 200 ms earlier. A win.

[Screenshot: network waterfall where the data request starts alongside the lazy chunks]

Although, one might object that this approach pushes data-fetching code into the initial bundle, which increases its size and hurts UX by slowing down the initial load. And that's fair. But if you keep the request logic as lean as possible, the extra weight will be negligible — while the overall loading experience will improve significantly.
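To keep that initial-bundle cost small, the request code living in the initial chunk can be a thin fetch wrapper. This is a sketch of what "lean" might look like; the article's useApi/request helpers aren't shown, so the name and signature here are assumptions:

```typescript
// A deliberately small request helper suitable for the initial chunk:
// one fetch call, JSON in, JSON out, errors surfaced as exceptions.
export const request = async <Res,>(
  method: 'GET' | 'POST',
  url: string,
  body?: unknown,
): Promise<Res> => {
  const response = await fetch(url, {
    method,
    headers: { 'Content-Type': 'application/json' },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json() as Promise<Res>;
};
```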

And this approach won't worsen cacheability either, because the initial JavaScript files always lose their cache regardless of what changes we make.

📌 Try to request data from the server without waiting for your lazy chunk to be downloaded.

A real life example

This approach is not limited to preloading data for pages only. We can be creative about it. Let me show you how I utilized it in my current project. Here is a simplified version of one of the pages:

export default ({ roomID }: { roomID: string }) => {
  const { data, isLoading } = useApi('POST', '/api/room', {
    params: { roomID },
  });

  return (
    <>
      <section className="left">
        <VideoPlayer data={data} isLoading={isLoading} />
      </section>

      <div className="right">
        {data && (
          <>
            <Card1Lazy room={data} roomID={roomID} />
            <Card2Lazy room={data} roomID={roomID} />
            <Card3Lazy room={data} roomID={roomID} />
          </>
        )}
      </div>
    </>
  );
}

And each card used to look like this:

export default ({ room, roomID }: { room: RoomData, roomID: string }) => {
  const { data, isLoading } = useApi('POST', '/api/card/1', {
    params: { roomID },
  });

  return (...)
};

The overall timeline of rendering these cards looked this way:

[Screenshot: rendering timeline of the cards before the optimization]

  1. The browser downloads the initial JavaScript chunks.
  2. The browser downloads the lazy chunks for this particular page.
  3. The browser requests data about a livestream room and waits for the response.
    • Each card needs the room data to render its UI, but not to request its own data. Yet since the card component isn't rendered, its request never starts.
  4. Then, for each card, we download its own lazy chunk.
  5. Then, for each card, we request its data.

To display a card, users had to wait between 2.8 and 4.1 seconds, depending on card API latency.

But here's what I did:

  1. Started requesting the room data together with the page's lazy chunks.
  2. Started requesting the card data right after the page's lazy chunks were loaded. Even though we can't render the cards yet, we can already request their data.

[Screenshot: rendering timeline of the cards after the optimization]

Now, each card waits for three parallel processes: livestream data, card API data, and card static files. Until all three have finished, the fallback is displayed. I parallelized the loading as much as possible and managed to reduce the display time of each card by approximately 1.4 seconds. Plus, the video player is now displayed 0.2 seconds faster as well.
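The gain comes purely from overlapping independent waits. Here's a minimal sketch of the idea with simulated latencies (the numbers and names are made up for illustration):

```typescript
// Simulate three independent loading processes and await them together.
const delay = <T,>(ms: number, value: T): Promise<T> =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

export const loadCardParallel = async () => {
  const started = Date.now();
  // Chunk download, room data, and card data all start at the same time.
  const [chunk, room, card] = await Promise.all([
    delay(300, 'card chunk'),
    delay(200, 'room data'),
    delay(250, 'card data'),
  ]);
  // Total wait is roughly max(300, 200, 250) ms instead of 300 + 200 + 250 ms.
  return { chunk, room, card, elapsed: Date.now() - started };
};
```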

📌 Be creative.


Conclusion

Thank you for joining me once again on our journey to make our web applications indefinitely lazy. If you have any questions, feel free to ask them in the comments.

And to summarize this article, let's list the rules we learned today:

  • 📌 Use Webpack's prefetching and preloading for critically important chunks for your app.
  • 📌 Try preloading your lazy components manually, based on user interactions.
  • 📌 Try to request data from the server without waiting for your lazy chunk to be downloaded.
  • 📌 And most importantly: Be creative.

A-a-and that was it. I hope some of you read all the articles in this series, and maybe even picked up something new from them. So, what do you think? Am I overcomplicating it, or is lazy loading more complicated than you thought? Let me know in the comments.

Here are my social links: LinkedIn Telegram GitHub. See you ✌️
