
Deep dive into lazy loading images 🖼

Carles Núñez ・ 7 min read

The first question is... why?

In the current web app world, saving time and network when a user lands on our webpage means a higher chance of increasing engagement and a big opportunity for a better user experience. Trust me when I say that, in most cases, we are wasting a bunch of resources when our user loads a webpage. Resources like, for example, network bandwidth.

There's no need to be an expert to realise that if one of the biggest problems in web development is wasting resources, the solution could be to stop our users' phones and computers from wasting them, right?

Don't load more than what you need

This concept doesn't come only from web development but also from game development, where it is called viewing-frustum culling, which, according to Wikipedia, is:

the process of removing objects that lie completely outside the viewing frustum from the rendering process. Rendering these objects would be a waste of time since they are not directly visible.

If we translate this sentence to the web development environment, we can see that our viewing frustum is the above-the-fold area of our webpage.
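To make the idea concrete, here is a tiny sketch of the web version of frustum culling: an element is only worth loading once it is within some threshold of the visible viewport. The function name and parameters here are mine, not from any library.

```javascript
// Toy "frustum check" for a vertically scrolling page. All names are
// illustrative, not taken from any real library.
function isNearViewport(elementTop, scrollY, viewportHeight, thresholdPx) {
  const viewportBottom = scrollY + viewportHeight;
  // The element is worth loading once its top edge is within
  // `thresholdPx` below the bottom of the visible area.
  return elementTop <= viewportBottom + thresholdPx;
}
```

With an 800px-high viewport at the top of the page and a 300px threshold, an element 1000px down the page qualifies for loading, while one 2000px down does not yet.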


Why, in my opinion, native lazy loading is not an option

Starting from Chrome 76, you can use the loading attribute to lazy-load resources without writing custom lazy-loading code or using a separate JavaScript library. This was my approach the first time I implemented an image lazy-load strategy on a site, but after implementing the code... nothing was happening. Why?

To better understand what was happening, I decided to dive deep into the Chromium code and see how Chromium engineers were implementing their lazy-load solution, so I could figure out what I was doing wrong.

How does native lazy load work?

To initialise image monitoring for lazy load, the browser calls the following function (check the code here):

void LazyImageHelper::StartMonitoring(blink::Element* element) {
  Document* document = GetRootDocumentOrNull(element);
  if (!document)
    return;

  // Prepare the messages used to perform console.log operations later if an
  // attribute is not ok.
  using DeferralMessage = LazyLoadImageObserver::DeferralMessage;
  auto deferral_message = DeferralMessage::kNone;
  if (auto* html_image = ToHTMLImageElementOrNull(element)) {
    // Get the loading attribute value; it can be eager, lazy, auto or absent.
    LoadingAttrValue loading_attr = GetLoadingAttrValue(*html_image);
    DCHECK_NE(loading_attr, LoadingAttrValue::kEager);
    if (loading_attr == LoadingAttrValue::kAuto) {
      deferral_message = DeferralMessage::kLoadEventsDeferred;
    } else if (!IsDimensionAbsoluteLarge(*html_image)) {
      DCHECK_EQ(loading_attr, LoadingAttrValue::kLazy);
      deferral_message = DeferralMessage::kMissingDimensionForLazy;
    }
  }
  // Here is where it all starts: call the lazy load image observer and start
  // monitoring.
  document->EnsureLazyLoadImageObserver().StartMonitoringNearViewport(
      document, element, deferral_message);
}

This code snippet leads to the StartMonitoringNearViewport function, which does the following:

void LazyLoadImageObserver::StartMonitoringNearViewport(
    Document* root_document,
    Element* element,
    DeferralMessage deferral_message) {
  if (!lazy_load_intersection_observer_) { // 1
    lazy_load_intersection_observer_ = IntersectionObserver::Create(
        {Length::Fixed(
            GetLazyImageLoadingViewportDistanceThresholdPx(*root_document))}, // 2
        {std::numeric_limits<float>::min()}, root_document,
        WTF::BindRepeating(&LazyLoadImageObserver::LoadIfNearViewport, // 3
                           WrapWeakPersistent(this)));
  }
  lazy_load_intersection_observer_->observe(element);
}

To follow the flow, I've numbered some lines, which I'll explain below.

What exactly does this code do?

1 - They check if an intersection observer has already been created; otherwise, they create it.

Don't you see? The native implementation of lazy-loading images works the same way as a JavaScript library would, just using the low-level IntersectionObserver API. Isn't it amazing? 🙂
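To illustrate the pattern, here is a simplified JavaScript sketch of an IntersectionObserver callback that swaps data-src for src once an image enters the observed area. This is my own minimal version, not code from Chromium or from any of the libraries discussed later.

```javascript
// Minimal lazy-load callback: swap data-src -> src when an image
// intersects, then stop observing it so it only loads once.
function loadIfNearViewport(entries, observer) {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    if (img.dataset.src) {
      img.src = img.dataset.src; // this triggers the actual network request
    }
    observer.unobserve(img); // each image only needs to load once
  }
}

// In a browser you would wire it up roughly like this:
// const observer = new IntersectionObserver(loadIfNearViewport,
//     { rootMargin: "3000px 0px" }); // start loading 3000px ahead
// document.querySelectorAll("img[data-src]")
//     .forEach((img) => observer.observe(img));
```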

2 - Calling GetLazyImageLoadingViewportDistanceThresholdPx: this function gets the threshold at which images should start loading, based on the network connection you are using.

Here is the code implementation, but if you don't care about the implementation, you can jump directly to the table below for more info about the thresholds:

int GetLazyImageLoadingViewportDistanceThresholdPx(const Document& document) {
  const Settings* settings = document.GetSettings();
  if (!settings)
    return 0;

  switch (GetNetworkStateNotifier().EffectiveType()) {
    case WebEffectiveConnectionType::kTypeUnknown:
      return settings->GetLazyImageLoadingDistanceThresholdPxUnknown();
    case WebEffectiveConnectionType::kTypeOffline:
      return settings->GetLazyImageLoadingDistanceThresholdPxOffline();
    case WebEffectiveConnectionType::kTypeSlow2G:
      return settings->GetLazyImageLoadingDistanceThresholdPxSlow2G();
    case WebEffectiveConnectionType::kType2G:
      return settings->GetLazyImageLoadingDistanceThresholdPx2G();
    case WebEffectiveConnectionType::kType3G:
      return settings->GetLazyImageLoadingDistanceThresholdPx3G();
    case WebEffectiveConnectionType::kType4G:
      return settings->GetLazyImageLoadingDistanceThresholdPx4G();
  }
  return 0;
}

So according to the native configuration (a JSON5 file), we can see that depending on our internet connection we get one threshold or another, but this threshold will always be >= 3000px, which honestly is a lot.

| Network | Threshold |
| --- | --- |
| Slow 2G | 8000px |
| 2G | 6000px |
| 3G | 4000px |
| 4G | 3000px |
| Offline | 8000px |
| Unknown | 5000px |
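In JavaScript terms, the table above boils down to a simple lookup. This is my own sketch (the helper name is made up): the first four keys mirror the strings exposed by navigator.connection.effectiveType, plus two extra keys for the offline/unknown cases Chromium distinguishes internally.

```javascript
// Thresholds (px) from the table above. The helper name and the
// "offline"/"unknown" keys are illustrative, not a real browser API.
const LAZY_LOAD_THRESHOLD_PX = {
  "slow-2g": 8000,
  "2g": 6000,
  "3g": 4000,
  "4g": 3000,
  "offline": 8000,
  "unknown": 5000,
};

function lazyLoadThresholdPx(effectiveType) {
  // Mirror the C++ fallthrough: unknown connection types get 0.
  return LAZY_LOAD_THRESHOLD_PX[effectiveType] ?? 0;
}
```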

3 - Finally, it calls the 'callback' function, which does the following (check the full snippet):

void LazyLoadImageObserver::LoadIfNearViewport(
    const HeapVector<Member<IntersectionObserverEntry>>& entries) {
  for (auto entry : entries) {
    Element* element = entry->target();
    auto* image_element = DynamicTo<HTMLImageElement>(element);
    // If the loading_attr is 'lazy' explicitly, we'd better wait for
    // intersection.
    if (!entry->isIntersecting() && image_element &&
        !EqualIgnoringASCIICase(
            image_element->FastGetAttribute(html_names::kLoadingAttr),
            "lazy")) {
      // Fully load the invisible image elements. The elements can be invisible
      // by style such as display:none, visibility: hidden, or hidden via
      // attribute, etc. Style might also not be calculated if the ancestors
      // were invisible.
      const ComputedStyle* style = entry->target()->GetComputedStyle();
      if (!style || style->Visibility() != EVisibility::kVisible ||
          style->Display() == EDisplay::kNone) {
        // Check that style was null because it was not computed since the
        // element was in an invisible subtree.
        DCHECK(style || IsElementInInvisibleSubTree(*element));
        image_element->LoadDeferredImage();
      }
    }
    if (!entry->isIntersecting())
      continue;
    if (image_element)
      image_element->LoadDeferredImage();

    // Load the background image if the element has one deferred.
    if (const ComputedStyle* style = element->GetComputedStyle())
      style->LoadDeferredImages(element->GetDocument());

    lazy_load_intersection_observer_->unobserve(element);
  }
}


You can check other points of view regarding this topic here

So you're saying I should use a JS library, but... which one?

Taking the web.dev article Lazy Loading Images and Video as a starting point, I invested a bit of time into analysing the different options we have, and the pros and cons of some of them.

Analysing the state of the art

First of all, I checked which solutions we currently have on the market based on web.dev recommendations, how well maintained they are, and how popular they are in the community.

We have 4 recommendations, and all of them rely on the IntersectionObserver API to perform their work.

I will analyse them using six metrics:

  • Stars
  • Releases
  • Public repos using it
  • Contributors
  • Library size
  • NPM Download trend


| Library name | ⭐️ Stars | 🚀 Releases | 📦 Used by | 👥 Contributors | 🏋🏽‍♂️ Size |
| --- | --- | --- | --- | --- | --- |
| Lozad | 6.2k | 17 | 1.5k | 31 | 1kb |
| Blazy | 2.6k | 19 | 541 | 3 | 1.9kb |
| Yall | 1k | 13 | 69 | 13 | 1kb |
| Lazy Sizes | 13.3k | 100 | 11.2k | 38 | 3.3kb |

NPM Trends



It seems that lazysizes is the library with the most community support, but it is also the heaviest, so for my tests and benchmarks I'm going to select TWO of the libraries:

  • Lazysizes
  • Lozad

Field test

To check which library has the better API, I decided to perform a small test on a CodeSandbox site and see how each implementation behaves.


import React, { useEffect } from 'react';
import lozad from 'lozad';

export default ({ src, ...other }) => {
  const { observe } = lozad();

  useEffect(() => {
    observe();
  }, []);

  return <img className="lozad" data-src={src} {...other} />;
};
Lozad uses a className as an identifier so the library can replace the data-src attribute with a real src attribute and load the image.

It also exposes an observe function to start observing the elements. The observe function marks elements as loaded, so multiple calls to it shouldn't affect performance at all. You can check the implementation of that function in the lozad.js source code - here.


import React from 'react';
import 'lazysizes';
import 'lazysizes/plugins/attrchange/ls.attrchange';

export default ({ src, ...other }) => {
  return <img className="lazyload" data-src={src} {...other} />;
};

LazySizes has a similar API to lozad, but you don't need to call an observe function: it is called automatically on import. On the other hand, if you change data-src dynamically, you have to add a plugin that watches the data-src value, so that if it changes, the image load function is re-triggered.

More info about ls.attrchange here

Summary: The good and the bad

Lozad PROS 👍

  • Lozad is a really tiny library (only 1kb!)
  • Lozad is really easy to use and gives us autonomy in calling the observe and unobserve methods
  • It loads only what needs to be loaded with the default threshold (2 images on mobile)
  • It is configurable

Lozad CONS 👎

  • Running the observer in each component is not something I like. Even if it is not a performance problem, I wouldn't want a lozad.observe call outside the lazy-image component definition; the solution should work as-is, with no extra work.
  • They are not clear on whether the library is SEO compliant, and that is a problem if you care about SEO - more info here

LazySizes PROS 👍

  • The API is really easy to use
  • The community behind it is huge
  • It is the library recommended by Google
  • It is fully SEO compliant
  • It can be extended with plugins - check here
  • It is also configurable
  • It works right out of the box; you only need to import the library

LazySizes CONS 👎

  • Its size is triple that of lozad
  • If you want to configure it, you have to put a config object on the window, which is not very elegant.
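For completeness, this is roughly what that window-based configuration looks like. lazyClass and expand are documented lazySizes options, but the values here are just illustrative, and the snippet must run before lazysizes is imported (in a browser, globalThis is window):

```javascript
// Must run BEFORE `import 'lazysizes'`. In a browser, globalThis === window.
globalThis.lazySizesConfig = globalThis.lazySizesConfig || {};
globalThis.lazySizesConfig.lazyClass = "lazyload"; // marker class on <img> tags
globalThis.lazySizesConfig.expand = 370; // px around the viewport to preload
```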

General tradeoff to consider if you care about SSR

  • We are lazy loading images using a library that is imported and consumed in our bundles. This means we lose the power of SSR for images, as this JS code must be loaded before images can appear on first render. But it shouldn't be a problem unless you have a big amount of JS to load in your bundle.


In my opinion, in this case the community and Google have picked the correct library to trust. LazySizes has slight differences that give us the right balance between size, usability and maintainability.

My recommendation is to give lazysizes a chance and test its viability.

Head Photo by Kate Stone Matheson on Unsplash

Posted on May 30 by Carles Núñez - JavaScript passionate, games developer and teacher

Nice article, Carles.

Would you comment on why you think that 3.3kb (LazySizes) as compared to 1kb (lozad) is worth mentioning as a con?

Specifically, I'd like to know why you think it is such a big deal (the smaller, the better, sure). I mean, you are certainly going to save a lot of bandwidth using either solution, so the +2.3kb shouldn't really matter that much given that LazySizes otherwise has enormous advantages over lozad (as per your own comparison).

Thank you. Keep it up! 👍


Hello Sebastian!

First of all thanks for taking your time to read the article.

You are totally right that a difference of 2.3kb shouldn't be taken as a con. What I intended when I wrote the article was to give visibility to the fact that different approaches to a similar problem end up with different increases in bundle size.

But there's nothing wrong at all with an increase of only 2.3kb.

Thank you for your time, Sebastian!


Hey Carles! You demonstrated a solid, reliable approach to analyzing the existing solutions available on the npm registry.

I'm thankful that now I have a reasonable article I can share with developers (especially, juniors lacking experience in the matter you explained).

I'd also like to share a few cents from my experience on criteria that are used for selecting a library.

  1. Stars - I suggest weighting this metric as the least important when going through the library selection process. People tend to use stars as a sign of popularity and thus as a basis for trusting a library, but while popularity and trustworthiness may overlap, the first is no guarantee of the second.
    I would rather rely on a library with only 400 stars, a good PR-merging trend and up-to-date tests than on a library with 4k stars, only a couple of tests altogether and only rare visits to the issues from maintainers.
    I believe Carles implied this in the article, but because it didn't get enough attention, I decided to elaborate.

  2. Tests - when selecting a library, check whether there are any tests and whether they are green. You can simply clone the repo and run npm run test. It is not 100% coverage I usually look for here, but rather a number of tests that prove the main functionality is working (sometimes worth checking on different platforms), and that you can actually read and understand the tests well.

This way (when there are tests and they are clearly understandable), you may fix an issue if one is ever found in your project without discarding the library altogether (which in some cases may require rewriting a lot of code and may cost your team quite a few resources).

Even if the library maintainers do not respond to the pull request, you can still bring the fork to your company's npm registry (if available) or create your own scoped npm package (e.g. @your-name/lazysizes).

Hopefully, these few points will be useful to engineers bringing new packages to their projects.

Thanks again for the article; I learned quite a few things myself. I never went as far as diving into engine implementation details myself, but I will surely try this out 😉


Hey Vitaly, thank you so much for your explanation; it will be helpful for a lot of people too.

Yes, the star count on a repo is not the most important metric to look at when deciding which library to use, but it's ok to have a look at it. As a funny story about why stars are not that important... I've been working with React for more than five years and I realised THIS YEAR that I hadn't starred the repo yet 😂✌️🤦‍♂️

Glad to have your feedback Vitaly!


Thank you! Hopefully it will help people in the future! :)

Share the article if you feel it can be helpful to other people!


Anytime! And yes for sure 😊


👏 saved me a ton of time here, I was about to start looking for info. Outstanding.


Hey Adam! Happy to hear that. If you need any help during your lazy-load implementation, ping me here or on Twitter @carlesnunez :)


I've been using native for a long time and have had no complaints. And what is great about native is that it is native, and it will be improved over time, like all other native features.


Hello Pawel! Thanks for your comment.

We probably have different points of view, and different needs for a webapp, between your cases and mine.

Native lazy load will be a thing, but IMHO it is not yet mature enough to adopt, at least in my scenarios.


Thank you! This is really interesting 🤩


Thank you for taking the time to read it!