Nithin Bharadwaj

10 Proven JavaScript Network Performance Techniques for Faster Web Apps

As a best-selling author, I invite you to explore my books on Amazon. Don't forget to follow me on Medium and show your support. Thank you!

When I first started building web applications, I assumed fast internet was a given. I quickly learned that users face a staggering variety of network conditions. A slow-loading site isn't just an inconvenience; it's a barrier. My journey into optimizing network performance began with this simple goal: make things feel instant, no matter the connection.

The techniques I'll share aren't just theoretical. They are practical methods I've implemented to serve users on sluggish mobile networks, congested Wi-Fi, and everything in between. They focus on one core idea: being intelligent about what we load, when we load it, and how we manage the connection itself.

Let's start with a fundamental concept. Instead of waiting for a user to click a link, we can predict what they might need next and prepare it quietly in the background. This is like a thoughtful assistant who gathers documents before a meeting.

The following system tracks where users go on your site, learns patterns, and prefetches resources for the most likely next page. It respects data saver modes and adjusts its behavior based on the network speed.

class PredictivePreloader {
  constructor() {
    this.visitHistory = [];
    this.pageCache = new Map();
    this.activePrefetches = new AbortController();
    this.historyLimit = 50;

    this.watchUserNavigation();
  }

  watchUserNavigation() {
    // Monitor all page changes within the app
    const originalPush = history.pushState;
    const originalReplace = history.replaceState;

    history.pushState = (...args) => {
      originalPush.apply(history, args);
      this.logPageVisit(window.location.pathname);
    };

    history.replaceState = (...args) => {
      originalReplace.apply(history, args);
      this.logPageVisit(window.location.pathname);
    };

    window.addEventListener('popstate', () => {
      this.logPageVisit(window.location.pathname);
    });

    // Watch link hovers for quick intent
    document.addEventListener('mouseover', (event) => {
      const link = event.target.closest('a[href]');
      if (link && link.href.startsWith(window.location.origin)) {
        this.considerQuickPrefetch(new URL(link.href).pathname);
      }
    });
  }

  logPageVisit(path) {
    this.visitHistory.push({
      path: path,
      time: Date.now()
    });

    // Keep our history list manageable
    if (this.visitHistory.length > this.historyLimit) {
      this.visitHistory.shift();
    }
    this.refreshPredictions();
  }

  refreshPredictions() {
    const pathCounts = new Map();
    // Simple analysis: what page often comes after the current one?
    for (let i = 1; i < this.visitHistory.length; i++) {
      const prev = this.visitHistory[i - 1].path;
      const curr = this.visitHistory[i].path;
      if (prev === window.location.pathname) {
        pathCounts.set(curr, (pathCounts.get(curr) || 0) + 1);
      }
    }

    // Convert to a sorted list of likely next pages
    const likelyNextPages = Array.from(pathCounts.entries())
      .map(([path, count]) => ({ path, count }))
      .sort((a, b) => b.count - a.count);

    this.organizeBackgroundLoads(likelyNextPages);
  }

  organizeBackgroundLoads(predictions) {
    // Stop any ongoing prefetches that are no longer relevant
    this.activePrefetches.abort();
    this.activePrefetches = new AbortController();

    let loadLimit = 2; // Default conservative limit

    // Let the user's connection guide our aggressiveness
    if (navigator.connection) {
      if (navigator.connection.saveData) {
        loadLimit = 0; // Respect data saver mode
      } else if (navigator.connection.effectiveType === '4g') {
        loadLimit = 4;
      } else if (navigator.connection.effectiveType === '3g') {
        loadLimit = 1;
      } else {
        loadLimit = 0; // Be very conservative on 2G/slow-2G
      }
    }

    // Select the top N pages to prefetch
    const selectedPredictions = predictions.slice(0, loadLimit);
    selectedPredictions.forEach(pred => {
      this.fetchPageResources(pred.path);
    });
  }

  async fetchPageResources(path) {
    if (this.pageCache.has(path)) {
      return; // Already have it
    }

    try {
      const response = await fetch(path, {
        method: 'GET',
        headers: { 'X-Prefetch': 'true' },
        signal: this.activePrefetches.signal
      });

      const html = await response.text();
      // Store the raw HTML
      this.pageCache.set(path, {
        content: html,
        storedAt: Date.now()
      });

      // Now, parse it to find and preload key assets like CSS and JS
      this.findAndPreloadAssets(html);
    } catch (err) {
      if (err.name !== 'AbortError') {
        console.debug('Prefetch did not complete:', err.message);
      }
    }
  }

  findAndPreloadAssets(html) {
    const template = document.createElement('template');
    template.innerHTML = html;
    const doc = template.content;

    // Preload critical stylesheets
    doc.querySelectorAll('link[rel="stylesheet"]').forEach(link => {
      const href = link.href;
      if (href) {
        const preloadLink = document.createElement('link');
        preloadLink.rel = 'preload';
        preloadLink.href = href;
        preloadLink.as = 'style';
        document.head.appendChild(preloadLink);
      }
    });

    // Preload key scripts
    doc.querySelectorAll('script[src]').forEach(script => {
      if (!script.async && !script.defer) { // Blocking scripts are critical
        const preloadLink = document.createElement('link');
        preloadLink.rel = 'preload';
        preloadLink.href = script.src;
        preloadLink.as = 'script';
        document.head.appendChild(preloadLink);
      }
    });
  }

  considerQuickPrefetch(path) {
    // A simpler, faster prefetch triggered by mouseover
    if (!this.pageCache.has(path) && navigator.connection?.effectiveType !== 'slow-2g') {
      const link = document.createElement('link');
      link.rel = 'prefetch';
      link.href = path;
      document.head.appendChild(link);
    }
  }
}

// Initialize the preloader
const pagePreloader = new PredictivePreloader();

The second technique is about respecting what the user can see. There's no point in loading an image at the bottom of a long page before the content at the top. This approach, often called lazy loading, has native browser support now, but a custom implementation gives us more control.

We can use the Intersection Observer API to detect when an element is about to come into the viewport. Then, we load it just in time. This drastically reduces initial page weight.
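Before reaching for a custom observer, it is worth feature-detecting the native path the previous paragraph mentions. A minimal sketch, where `supportsNativeLazyLoading` and `applyLazyStrategy` are illustrative helper names and the `data-src` attribute convention is an assumption:

```javascript
// Sketch: use the browser's built-in lazy loading when available, and only
// fall back to a custom IntersectionObserver path otherwise.
function supportsNativeLazyLoading() {
  return typeof HTMLImageElement !== 'undefined' &&
    'loading' in HTMLImageElement.prototype;
}

function applyLazyStrategy(img) {
  if (supportsNativeLazyLoading()) {
    img.loading = 'lazy';      // the browser defers the fetch itself
    img.src = img.dataset.src; // safe to assign src right away
    return 'native';
  }
  return 'observer'; // hand the element to a custom IntersectionObserver
}
```

The custom implementation below is still useful when you need control the native attribute doesn't give you, such as network-aware quality switching.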

class ViewportLoader {
  constructor() {
    this.observer = null;
    this.imageQuality = 'high'; // default
    this.setupViewportWatch();
    this.setImageQualityBasedOnNetwork();
  }

  setupViewportWatch() {
    // Create an observer that triggers when elements are 100px from entering the viewport
    this.observer = new IntersectionObserver((items) => {
      items.forEach(item => {
        if (item.isIntersecting) {
          const element = item.target;
          this.loadElement(element);
          this.observer.unobserve(element); // Stop watching once loaded
        }
      });
    }, {
      rootMargin: '100px 0px', // Start loading 100px before element enters viewport
      threshold: 0.01
    });

    // Find all elements marked for lazy loading
    document.querySelectorAll('img[data-src], iframe[data-src], [data-bg-src]').forEach(el => {
      this.observer.observe(el);
    });
  }

  loadElement(element) {
    // Handle lazy images and iframes
    if (element.tagName === 'IMG' || element.tagName === 'IFRAME') {
      if (element.dataset.src) {
        element.src = this.getOptimalSource(element.dataset.src, element);
        delete element.dataset.src;
      }
    }
    // Handle lazy background images (e.g., on a div)
    if (element.dataset.bgSrc) {
      element.style.backgroundImage = `url('${this.getOptimalSource(element.dataset.bgSrc, element)}')`;
      delete element.dataset.bgSrc;
    }
  }

  getOptimalSource(originalSrc, element) {
    // Adaptive image loading based on network quality and available variants

    if (this.imageQuality === 'low' && element.dataset.lowSrc) {
      return element.dataset.lowSrc;
    }
    if (this.imageQuality === 'medium' && element.dataset.mediumSrc) {
      return element.dataset.mediumSrc;
    }
    // Use srcset if available for responsive images
    if (element.dataset.srcset) {
      element.srcset = element.dataset.srcset;
      delete element.dataset.srcset;
    }
    return originalSrc;
  }

  setImageQualityBasedOnNetwork() {
    if (!navigator.connection) return;

    const conn = navigator.connection;
    if (conn.saveData === true || (conn.effectiveType || '').includes('2g')) {
      this.imageQuality = 'low';
      document.documentElement.classList.add('data-saving-mode');
    } else if (conn.effectiveType === '3g') {
      this.imageQuality = 'medium';
    } else {
      this.imageQuality = 'high';
    }
    // Listen for changes (e.g., user switches from Wi-Fi to cellular).
    // Register only once: handleNetworkChange calls back into this method.
    if (!this.networkChangeListenerAttached) {
      this.networkChangeListenerAttached = true;
      conn.addEventListener('change', this.handleNetworkChange.bind(this));
    }
  }

  handleNetworkChange() {
    console.log('Network changed, adjusting loading strategy.');
    this.setImageQualityBasedOnNetwork();
    // You could re-evaluate elements not yet loaded here
  }
}

const viewportManager = new ViewportLoader();

The third technique deals with connection management. Browsers limit the number of simultaneous connections to a single server (typically six per host over HTTP/1.1). If you hit that limit, requests get queued. We can work around this by spreading requests across different domains (sharding) or, more usefully, by prioritizing and scheduling requests ourselves so the connections we do get serve the most important resources first.

While we can't directly control TCP connections from JavaScript, we can influence how browsers manage them with HTTP/2 and by structuring our requests wisely.
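One practical consequence: domain sharding only pays off on HTTP/1.1. A hedged sketch of checking the negotiated protocol via the Resource Timing API before deciding to shard; `isMultiplexed` and `shouldShardRequests` are illustrative names:

```javascript
// Sketch: sharding helps on HTTP/1.1 but is counterproductive on HTTP/2 and
// HTTP/3, where a single connection multiplexes many streams.
function isMultiplexed(protocol) {
  return protocol === 'h2' || protocol === 'h3';
}

function shouldShardRequests() {
  const entries = typeof performance !== 'undefined'
    ? performance.getEntriesByType('navigation')
    : [];
  // Only shard when we can confirm a non-multiplexed protocol was negotiated
  return entries.length > 0 && !isMultiplexed(entries[0].nextHopProtocol);
}
```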

class ConnectionManager {
  constructor() {
    this.requestQueue = [];
    this.activeRequests = 0;
    this.maxConcurrent = 6; // Typical browser limit per host
    this.hostPriorities = new Map(); // Give priority to certain domains
    this.init();
  }

  init() {
    // Keep a reference to the real fetch, then intercept calls to manage concurrency
    this.originalFetch = window.fetch.bind(window);
    window.fetch = this.managedFetch.bind(this);
  }

  managedFetch(resource, options = {}) {
    return new Promise((resolve, reject) => {
      const requestTask = {
        resource,
        options,
        resolve,
        reject
      };

      this.requestQueue.push(requestTask);
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.activeRequests >= this.maxConcurrent || this.requestQueue.length === 0) {
      return;
    }

    // Sort queue: critical requests first (e.g., CSS, above-the-fold JS)
    this.requestQueue.sort((a, b) => {
      const aPri = this.getPriority(a.resource, a.options);
      const bPri = this.getPriority(b.resource, b.options);
      return bPri - aPri;
    });

    while (this.activeRequests < this.maxConcurrent && this.requestQueue.length > 0) {
      const task = this.requestQueue.shift();
      this.activeRequests++;
      this.executeFetch(task);
    }
  }

  getPriority(resource, options) {
    // Define what matters most
    const url = typeof resource === 'string' ? resource : resource.url;
    if (options.priority === 'high') return 100;
    if (url.includes('/critical.') || url.includes('/bootstrap.')) return 90;
    if (url.match(/\.css$/)) return 80; // Stylesheets are high priority
    if (url.match(/\.js$/)) return 70;
    if (url.match(/\.(jpg|png|webp)$/)) return 50;
    return 10; // Everything else
  }

  async executeFetch(task) {
    try {
      // Use the saved original fetch: window.fetch now points at managedFetch,
      // so calling it here would re-queue the request forever. (Browsers manage
      // connection keep-alive themselves; 'Connection' is a forbidden header
      // that fetch would ignore anyway.)
      const response = await this.originalFetch(task.resource, task.options);
      task.resolve(response);
    } catch (error) {
      task.reject(error);
    } finally {
      this.activeRequests--;
      this.processQueue(); // See if we can start the next request
    }
  }

  // Method to hint at pre-connections to important domains
  preconnectToCriticalOrigins() {
    const origins = [
      'https://fonts.googleapis.com',
      'https://cdn.example.com',
      'https://api.myapp.com'
    ];
    origins.forEach(origin => {
      const link = document.createElement('link');
      link.rel = 'preconnect';
      link.href = origin;
      link.crossOrigin = 'anonymous';
      document.head.appendChild(link);
    });
  }
}

const connectionHelper = new ConnectionManager();
connectionHelper.preconnectToCriticalOrigins();

The fourth technique is about caching smarter, not just harder. The standard browser cache is useful, but we can implement our own strategies on top of it. One powerful pattern is "stale-while-revalidate." It means you serve a cached (possibly stale) version of a resource immediately, then fetch a fresh version in the background to update the cache for next time.

This pattern is perfect for data that updates frequently but where showing something quickly is better than showing nothing.
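The same policy exists at the HTTP level as `Cache-Control: max-age=N, stale-while-revalidate=M`. A tiny illustrative helper captures the decision the pattern makes; `swrDecision` is a hypothetical name and all windows are in milliseconds:

```javascript
// Given the age of a cached entry and the policy windows, decide how to
// serve it. Mirrors `Cache-Control: max-age=N, stale-while-revalidate=M`.
function swrDecision(ageMs, maxAgeMs, swrWindowMs) {
  if (ageMs <= maxAgeMs) return 'fresh';                     // serve cache as-is
  if (ageMs <= maxAgeMs + swrWindowMs) return 'revalidate';  // serve cache, refresh in background
  return 'fetch';                                            // too old: block on the network
}
```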

class SmartCache {
  constructor(cacheName = 'app-data') {
    this.cacheName = cacheName;
    this.cacheVersion = 'v1';
    // Keep the open() promise so get() can await it -- a caller may arrive
    // before the cache store has finished opening
    this.cachePromise = caches.open(`${this.cacheName}-${this.cacheVersion}`);
  }

  async get(request) {
    const cache = await this.cachePromise;
    const cached = await cache.match(request);
    const networkFetch = fetch(request).then(async response => {
      // Clone the response because a body can only be consumed once
      if (response.ok) {
        await cache.put(request, response.clone());
      }
      return response;
    }).catch(err => {
      console.debug('Network fetch failed:', err);
      return null;
    });

    if (cached) {
      // Return cached immediately, but trigger network update
      networkFetch.then(freshResponse => {
        // Optional: Notify components of fresh data
        if (freshResponse) {
          this.dispatchUpdateEvent(request, freshResponse);
        }
      });
      return cached;
    }

    // No cache? Wait for the network.
    const freshResponse = await networkFetch;
    return freshResponse;
  }

  dispatchUpdateEvent(request, freshResponse) {
    // Create a custom event that other parts of your app can listen to
    const event = new CustomEvent('cacheUpdated', {
      detail: {
        url: request.url,
        response: freshResponse.clone() // Clone again for the event listener
      }
    });
    window.dispatchEvent(event);
  }

  // For API data (JSON), we can add a helper
  async getJSON(url, options = {}) {
    const request = new Request(url, options);
    const response = await this.get(request);
    if (response) {
      return response.json();
    }
    throw new Error(`Could not retrieve ${url}`);
  }

  // Clear old caches on version update
  static async cleanupOldCaches() {
    const keys = await caches.keys();
    const currentCacheName = 'app-data-v1'; // This should match your logic
    const oldCaches = keys.filter(key => key !== currentCacheName);
    await Promise.all(oldCaches.map(key => caches.delete(key)));
  }
}

// Usage
const dataCache = new SmartCache();

// Later, to fetch user data with stale-while-revalidate
async function loadUserProfile(userId) {
  const data = await dataCache.getJSON(`/api/user/${userId}`);
  console.log('Data loaded (from cache or network):', data);
  return data;
}

// Another part of the app can listen for background updates
window.addEventListener('cacheUpdated', (event) => {
  if (event.detail.url.includes('/api/user/')) {
    console.log('User data was updated in the background.');
    // You might update a UI component here
  }
});

The fifth technique focuses on resource hints. These are simple HTML tags or HTTP headers that give the browser clues about what you'll need soon. They are low-cost, high-impact instructions.

The main ones are preconnect, preload, prefetch, and prerender. Using them correctly can shave precious milliseconds off your load times.

class ResourceHintManager {
  constructor() {
    this.injectedHints = new Set();
  }

  // Tell the browser to set up a connection to another origin
  preconnect(url, options = { crossOrigin: 'anonymous' }) {
    const link = document.createElement('link');
    link.rel = 'preconnect';
    link.href = url;
    if (options.crossOrigin) link.crossOrigin = options.crossOrigin;
    document.head.appendChild(link);
    this.injectedHints.add(`preconnect:${url}`);
  }

  // Force the browser to fetch a resource it WILL need for the current page
  preload(resourceUrl, resourceType) {
    const link = document.createElement('link');
    link.rel = 'preload';
    link.href = resourceUrl;
    link.as = resourceType;
    // For fonts, crossorigin is required
    if (resourceType === 'font') {
      link.crossOrigin = 'anonymous';
    }
    document.head.appendChild(link);
    this.injectedHints.add(`preload:${resourceUrl}`);
  }

  // Suggest the browser fetch a resource it MIGHT need for the next navigation
  prefetch(resourceUrl, resourceType = 'fetch') {
    // Only prefetch if the connection is good
    const conn = navigator.connection;
    if (conn && (conn.saveData || conn.effectiveType.includes('2g'))) {
      return; // Skip on slow connections or data saver
    }

    const link = document.createElement('link');
    link.rel = 'prefetch';
    link.href = resourceUrl;
    link.as = resourceType;
    document.head.appendChild(link);
    this.injectedHints.add(`prefetch:${resourceUrl}`);
  }

  // Automatically generate hints for critical assets found in the page
  autoGenerateFromPage() {
    // Preload critical CSS
    document.querySelectorAll('link[rel="stylesheet"][media="all"]').forEach(link => {
      if (link.href && !this.injectedHints.has(`preload:${link.href}`)) {
        this.preload(link.href, 'style');
      }
    });

    // Preload fonts referenced from inline <style> blocks (external
    // stylesheets expose an empty textContent, so they can't be scanned here)
    document.querySelectorAll('style').forEach(el => {
      const text = el.textContent || '';
      const fontRegex = /url\(["']?([^"')]+\.woff2?)["']?\)/gi;
      let match;
      while ((match = fontRegex.exec(text)) !== null) {
        const fontUrl = match[1];
        if (!fontUrl.startsWith('data:') && !this.injectedHints.has(`preload:${fontUrl}`)) {
          this.preload(fontUrl, 'font');
        }
      }
    });
  }
}

const hints = new ResourceHintManager();
// Use early, as soon as the <head> is available
hints.preconnect('https://fonts.gstatic.com');
hints.preload('/css/critical.css', 'style');
hints.autoGenerateFromPage();

The sixth technique is about compressing what we send. While gzip and Brotli compression are usually handled server-side, we can make smart choices on the client about what data format to request. For example, using WebP images instead of JPEG/PNG when the browser supports it, or fetching JSON instead of XML.

We can write a simple helper to always request the most efficient format.

class EfficientRequester {
  constructor() {
    this.supportedFormats = this.detectSupport();
  }

  detectSupport() {
    // Detect support for modern image formats
    const canvas = document.createElement('canvas');
    const supports = {
      webp: false,
      avif: false
    };

    // WebP: a browser that can encode WebP echoes the MIME type in the data URL
    canvas.width = canvas.height = 1;
    supports.webp = canvas.toDataURL('image/webp').indexOf('image/webp') > -1;

    // AVIF has no synchronous check: decode a tiny test image via new Image()
    // and flip the flag in its onload handler. Brotli support cannot be
    // detected from JavaScript at all -- the browser negotiates Accept-Encoding
    // and decompresses transparently.

    return supports;
  }

  getOptimalImageUrl(baseUrl) {
    // Given '/images/hero.jpg', return the best-supported format's URL
    const base = baseUrl.substring(0, baseUrl.lastIndexOf('.'));

    if (this.supportedFormats.avif) {
      return `${base}.avif`;
    } else if (this.supportedFormats.webp) {
      return `${base}.webp`;
    }
    return baseUrl; // Fallback to original
  }

  async fetchJSON(url, options = {}) {
    const headers = new Headers(options.headers || {});
    headers.set('Accept', 'application/json');
    // Accept-Encoding is a forbidden header: the browser sets
    // 'gzip, deflate, br' itself and decompresses transparently

    const response = await fetch(url, { ...options, headers });

    // Check if we got Brotli compressed response (optional logging)
    const encoding = response.headers.get('content-encoding');
    if (encoding && encoding.includes('br')) {
      console.debug('Received Brotli compressed response.');
    }

    return response.json();
  }

  // For dynamically loading scripts or styles, choose the minified version
  loadScript(src, options = {}) {
    const script = document.createElement('script');
    // Prefer .min.js in production. (process.env only exists when a bundler
    // injects it, so guard the lookup for plain-browser use.)
    const isProd = typeof process !== 'undefined' && process.env
      && process.env.NODE_ENV === 'production';
    const finalSrc = src.includes('.min.js') || !isProd
      ? src
      : src.replace('.js', '.min.js');

    script.src = finalSrc;
    script.async = options.async !== false;
    script.defer = options.defer !== false;

    return new Promise((resolve, reject) => {
      script.onload = resolve;
      script.onerror = reject;
      document.head.appendChild(script);
    });
  }
}

const requester = new EfficientRequester();

// Usage for an image element
const optimalHeroUrl = requester.getOptimalImageUrl('/assets/hero.jpg');
document.querySelector('#hero-image').src = optimalHeroUrl;

// Usage for API call
requester.fetchJSON('/api/data').then(data => console.log(data));

The seventh technique is about grouping and batching requests. Instead of sending ten small API calls when a page loads, we can sometimes combine them into one or two larger calls. This cuts per-request overhead: headers, queueing, and on mobile, repeated radio wake-ups.

This requires coordination with your backend API design, but the frontend can facilitate it.
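For example, a batch endpoint might accept one POST carrying every queued payload and answer positionally. The envelope below is an assumption, not a standard, and `matchResults` is an illustrative helper showing how positional responses map back to their requests:

```javascript
// Assumed wire format: all queued payloads travel under a `batch` key, and
// the server answers with per-item status objects in the same order.
const exampleBatchRequest = {
  batch: [
    { theme: 'dark' },
    { notifications: false }
  ]
};

const exampleBatchResponse = [
  { status: 'success', data: { theme: 'dark' } },
  { status: 'success', data: { notifications: false } }
];

// Positional matching: result i answers request i
function matchResults(requests, results) {
  return requests.map((req, i) => ({
    request: req,
    ok: Boolean(results[i] && results[i].status === 'success')
  }));
}
```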

class RequestBatcher {
  constructor(delayMs = 50) {
    this.delay = delayMs;
    this.batchQueue = new Map(); // Key: endpoint, Value: array of request callbacks
    this.timeoutId = null;
  }

  // Schedule a request to be batched
  schedule(endpoint, data) {
    return new Promise((resolve, reject) => {
      if (!this.batchQueue.has(endpoint)) {
        this.batchQueue.set(endpoint, []);
      }
      this.batchQueue.get(endpoint).push({ data, resolve, reject });

      // Start the batch timer if not already running
      if (!this.timeoutId) {
        this.timeoutId = setTimeout(() => this.flush(), this.delay);
      }
    });
  }

  // Send all batched requests
  async flush() {
    this.timeoutId = null;
    const promises = [];

    for (const [endpoint, requests] of this.batchQueue.entries()) {
      if (requests.length === 0) continue;

      // Combine data from all requests for this endpoint
      const combinedData = requests.map(r => r.data);

      // Send a single batch request
      const batchPromise = fetch(endpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ batch: combinedData })
      })
      .then(async response => {
        const results = await response.json(); // Assume backend returns an array of results
        // Match each result back to its original request
        requests.forEach((request, index) => {
          if (results[index] && results[index].status === 'success') {
            request.resolve(results[index].data);
          } else {
            request.reject(new Error('Batch item failed'));
          }
        });
      })
      .catch(error => {
        // If the whole batch fails, reject all individual promises
        requests.forEach(request => request.reject(error));
      });

      promises.push(batchPromise);
    }

    // Clear the queue
    this.batchQueue.clear();

    await Promise.all(promises);
  }

  // For immediate needs, bypass batching
  fetchImmediately(endpoint, data) {
    return fetch(endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(data)
    }).then(r => r.json());
  }
}

// Example: Batching multiple user status updates
const batcher = new RequestBatcher(100); // Batch every 100ms

async function updateUserSettings(settings) {
  // This will be batched with other calls to '/api/user/settings' within 100ms
  return batcher.schedule('/api/user/settings', settings);
}

// In your UI code, call this multiple times rapidly
updateUserSettings({ theme: 'dark' });
updateUserSettings({ notifications: false });
// Only one network request will be made for both updates

The eighth technique is progressive loading and hydration. For complex web apps, especially Single Page Applications (SPAs), we can send a basic HTML shell first, then progressively enhance it with JavaScript. This makes the initial render very fast. The JavaScript then "hydrates" the page, attaching event listeners and making it interactive.

This pattern is at the heart of many modern frameworks like Next.js and Nuxt.js. Here's a simplified manual approach.

class ProgressiveEnhancer {
  constructor() {
    this.interactiveElements = new Map();
  }

  // Mark an element that will become interactive after JS loads
  markForHydration(elementId, componentName) {
    const element = document.getElementById(elementId);
    if (element) {
      element.classList.add('js-pending'); // Visual cue (e.g., dimmed)
      this.interactiveElements.set(elementId, {
        component: componentName,
        element: element,
        hydrated: false
      });
    }
  }

  // After main JS loads, hydrate all marked elements
  hydratePage() {
    console.log('Hydrating interactive components...');
    this.interactiveElements.forEach((info, id) => {
      this.hydrateComponent(info.component, info.element);
      info.hydrated = true;
      info.element.classList.remove('js-pending');
      info.element.classList.add('js-active');
    });
  }

  hydrateComponent(name, element) {
    // This would be a switch to your actual component initialization logic
    switch (name) {
      case 'SearchBox':
        this.initializeSearchBox(element);
        break;
      case 'ImageCarousel':
        this.initializeCarousel(element);
        break;
      // ... other components
    }
  }

  initializeSearchBox(element) {
    const input = element.querySelector('input');
    const button = element.querySelector('button');
    // Attach real event listeners
    button.addEventListener('click', () => {
      console.log('Searching for:', input.value);
      // Actual search logic here
    });
    // Input could now have autocomplete, etc.
  }

  initializeCarousel(element) {
    // Turn a static list of images into a sliding carousel
    console.log('Making carousel interactive:', element);
  }

  // Load non-critical JS after the page is stable
  loadSecondaryScripts() {
    // Wait for main thread to be idle
    if ('requestIdleCallback' in window) {
      requestIdleCallback(() => {
        this.loadScript('/js/analytics.js');
        this.loadScript('/js/chat-widget.js');
      });
    } else {
      // Fallback: wait a few seconds
      setTimeout(() => {
        this.loadScript('/js/analytics.js');
        this.loadScript('/js/chat-widget.js');
      }, 3000);
    }
  }

  loadScript(src) {
    const script = document.createElement('script');
    script.src = src;
    script.async = true;
    document.body.appendChild(script);
  }
}

// In your HTML, you'd have:
// <div id="search-box" class="static-search">...basic HTML form...</div>
// <div id="product-carousel">...static images...</div>

// Early in your page's JS execution:
const enhancer = new ProgressiveEnhancer();
enhancer.markForHydration('search-box', 'SearchBox');
enhancer.markForHydration('product-carousel', 'ImageCarousel');

// After the main app framework loads and the DOM is ready:
document.addEventListener('DOMContentLoaded', () => {
  enhancer.hydratePage();
  enhancer.loadSecondaryScripts();
});

The ninth technique is about monitoring and adapting in real-time. We can measure how long actual requests take and adjust our strategy. For example, if we detect a very slow network, we can downgrade image quality further or disable certain non-essential features.

We create a network performance monitor that collects data and provides feedback to other parts of our optimization system.

class NetworkMonitor {
  constructor() {
    this.metrics = [];
    this.slowThreshold = 5000; // 5 seconds
    this.reportingUrl = '/api/network-metrics';
    this.startMonitoring();
  }

  startMonitoring() {
    // Use the PerformanceObserver API to watch resource and navigation timing
    if ('PerformanceObserver' in window) {
      const obs = new PerformanceObserver((list) => {
        list.getEntries().forEach(entry => {
          this.recordMetric(entry);
          this.analyzeAndAdapt(entry);
        });
      });
      obs.observe({ entryTypes: ['resource', 'navigation'] });

      // Also capture initial page load metrics
      window.addEventListener('load', () => {
        const navEntry = performance.getEntriesByType('navigation')[0];
        if (navEntry) this.recordMetric(navEntry);
      });
    }
  }

  recordMetric(entry) {
    const metric = {
      name: entry.name,
      duration: entry.duration,
      type: entry.entryType,
      size: entry.transferSize || 0,
      startTime: entry.startTime,
      protocol: entry.nextHopProtocol
    };
    this.metrics.push(metric);

    // Keep only recent metrics
    if (this.metrics.length > 100) {
      this.metrics.shift();
    }

    // Optionally report to analytics
    this.reportToAnalytics(metric);
  }

  analyzeAndAdapt(entry) {
    // If a critical resource is very slow, trigger adaptations
    if (entry.duration > this.slowThreshold) {
      console.warn(`Slow resource detected: ${entry.name} took ${entry.duration}ms`);
      this.triggerFallbackMode();
    }

    // Calculate average latency
    const recent = this.metrics.slice(-10);
    const avgDuration = recent.reduce((sum, m) => sum + m.duration, 0) / recent.length;
    if (avgDuration > 2000) {
      // Consistently slow network
      document.documentElement.classList.add('network-slow');
    } else {
      document.documentElement.classList.remove('network-slow');
    }
  }

  triggerFallbackMode() {
    // Disable fancy animations. (CSS attr() cannot supply a background-image,
    // so low-res image swaps are done in JS below instead.)
    if (document.getElementById('fallback-styles')) return; // Only add once
    const style = document.createElement('style');
    style.id = 'fallback-styles';
    style.textContent = `
      .network-slow * {
        animation-duration: 0.01ms !important;
        transition-duration: 0.01ms !important;
      }
    `;
    document.head.appendChild(style);

    // Swap elements that declare a low-res fallback image
    document.querySelectorAll('[data-low-res]').forEach(el => {
      el.style.backgroundImage = `url('${el.dataset.lowRes}')`;
    });
  }

  reportToAnalytics(metric) {
    // Use sendBeacon for efficient, non-blocking reporting
    if (navigator.sendBeacon) {
      const data = new Blob([JSON.stringify(metric)], { type: 'application/json' });
      navigator.sendBeacon(this.reportingUrl, data);
    }
  }

  getCurrentHealth() {
    const recent = this.metrics.slice(-5);
    if (recent.length === 0) return 'unknown';
    const avg = recent.reduce((s, m) => s + m.duration, 0) / recent.length;
    if (avg < 1000) return 'good';
    if (avg < 3000) return 'moderate';
    return 'poor';
  }
}

const monitor = new NetworkMonitor();

// Other parts of your app can check network health
function shouldLoadHeavyFeature() {
  const health = monitor.getCurrentHealth();
  return health === 'good';
}

if (shouldLoadHeavyFeature()) {
  // Load 3D model or large video
} else {
  // Show a placeholder or simplified version
}
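Where the browser exposes the Network Information API (`navigator.connection`, currently limited to Chromium-based browsers), its hints can feed into the same decision as the monitor's health score. The `pickAssetQuality` helper below is a hypothetical sketch, not part of the monitor itself, that combines both signals and respects the user's data-saver preference:

```javascript
// Hypothetical helper: map monitor health plus connection hints
// to an asset quality tier. Pure function, so it is easy to test.
function pickAssetQuality(health, connection) {
  // Respect an explicit data-saver preference above everything else
  if (connection && connection.saveData) return 'low';

  // Coarse classification from the Network Information API
  if (connection && ['slow-2g', '2g'].includes(connection.effectiveType)) {
    return 'low';
  }

  if (health === 'good') return 'high';
  if (health === 'moderate') return 'medium';
  return 'low';
}

// In the browser, feed it live values. navigator.connection is
// undefined outside Chromium-based browsers, so pass it as-is and
// let the helper's guards handle the missing value:
// const quality = pickAssetQuality(monitor.getCurrentHealth(), navigator.connection);
// imageEl.src = `/images/hero-${quality}.webp`;
```

Keeping the decision logic pure makes it trivial to unit-test without a browser, while the live wiring stays a one-liner.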

The tenth and final technique uses service workers to build offline and low-network experiences. A service worker is a script that runs in the background, separate from your web page. It can intercept network requests, serve cached responses, and even provide a custom offline page.

This is a more advanced technique but offers the most robust control over network performance and reliability.

// In a separate file: service-worker.js

const CACHE_NAME = 'app-cache-v2';
const OFFLINE_URL = '/offline.html';

const CACHE_URLS = [
  '/',
  '/styles/main.css',
  '/scripts/app.js',
  '/images/logo.svg',
  OFFLINE_URL
];

// Install event: pre-cache critical resources
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(cache => cache.addAll(CACHE_URLS))
      .then(() => self.skipWaiting()) // Activate immediately
  );
});

// Activate event: clean up old caches
self.addEventListener('activate', event => {
  event.waitUntil(
    caches.keys().then(cacheNames => {
      return Promise.all(
        cacheNames.map(cacheName => {
          if (cacheName !== CACHE_NAME) {
            return caches.delete(cacheName);
          }
        })
      );
    }).then(() => self.clients.claim())
  );
});

// Fetch event: network-first with cache fallback
self.addEventListener('fetch', event => {
  // Skip non-GET requests and chrome-extension requests
  if (event.request.method !== 'GET' || event.request.url.startsWith('chrome-extension://')) {
    return;
  }

  event.respondWith(
    // Try network first
    fetch(event.request)
      .then(networkResponse => {
        // Cache a copy only when the response is valid
        if (networkResponse.ok) {
          const responseClone = networkResponse.clone();
          caches.open(CACHE_NAME).then(cache => {
            cache.put(event.request, responseClone);
          });
        }
        return networkResponse;
      })
      .catch(async () => {
        // Network failed, try cache
        const cachedResponse = await caches.match(event.request);
        if (cachedResponse) {
          return cachedResponse;
        }
        // For navigations, fall back to the pre-cached offline page
        if (event.request.mode === 'navigate') {
          const offlinePage = await caches.match(OFFLINE_URL);
          if (offlinePage) return offlinePage;
        }
        // Otherwise, return a generic error response
        return new Response('Network error', { status: 408 });
      })
  );
});

// Background sync: retry failed POST requests when online
self.addEventListener('sync', event => {
  if (event.tag === 'sync-forms') {
    event.waitUntil(retryFailedSubmissions());
  }
});

async function retryFailedSubmissions() {
  // openSubmissionDatabase and its methods are app-specific helpers,
  // typically implemented on top of IndexedDB
  const db = await openSubmissionDatabase();
  const failed = await db.getAllFailed();
  for (const submission of failed) {
    try {
      const response = await fetch(submission.url, {
        method: 'POST',
        body: submission.data
      });
      // Only clear the queued entry once the server accepts it
      if (response.ok) {
        await db.deleteSubmission(submission.id);
      }
    } catch (err) {
      console.log('Retry failed, will try again next sync.');
    }
  }
}

// To register the service worker from your main page:
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/service-worker.js')
      .then(registration => {
        console.log('ServiceWorker registered');
      })
      .catch(err => {
        console.log('ServiceWorker registration failed:', err);
      });
  });
}
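The `sync` handler in the service worker only fires after the page has registered the `sync-forms` tag, a step the listing above leaves out. Background Sync is currently Chromium-only, so the sketch below feature-detects it and falls back to sending immediately. `queueSubmission`, `saveSubmissionLocally`, and `sendNow` are hypothetical names, and the in-memory array stands in for a real IndexedDB-backed queue:

```javascript
// Page-side companion to the 'sync-forms' handler in service-worker.js.
// Minimal in-memory stand-in for an IndexedDB-backed queue (illustrative):
const pendingSubmissions = [];
async function saveSubmissionLocally(entry) {
  pendingSubmissions.push(entry);
}

// Queue a form submission: use Background Sync when the browser
// supports it, otherwise fall back to sending right away.
async function queueSubmission(url, data, sendNow) {
  const registration =
    typeof navigator !== 'undefined' && 'serviceWorker' in navigator
      ? await navigator.serviceWorker.ready
      : null;

  if (registration && 'sync' in registration) {
    // Persist the payload, then register the same tag the
    // service worker's 'sync' listener watches for.
    await saveSubmissionLocally({ url, data });
    await registration.sync.register('sync-forms');
  } else {
    // No Background Sync (e.g. Safari, Firefox): attempt the request now.
    await sendNow(url, data);
  }
}
```

Callers pass their normal submit function as `sendNow`, so the same code path works whether or not the browser can defer the request.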

Each of these ten techniques addresses a different part of the network performance puzzle. Predictive loading anticipates needs. Lazy loading respects the viewport. Connection management optimizes throughput. Smart caching provides instant data. Resource hints give the browser a head start. Efficient formats reduce payload size. Batching cuts down on chatter. Progressive hydration ensures interactivity. Real-time monitoring allows adaptation. Service workers grant ultimate resilience.

Implementing all of them at once is not necessary. Start with one or two that address your biggest bottlenecks. Measure the impact using real user monitoring tools. The goal is not perfection, but a consistently good experience for every user, regardless of their connection. In my work, applying even a few of these methods has transformed slow, frustrating applications into fast, responsive tools that users enjoy. The network is a variable we cannot control, but how we interact with it is entirely in our hands.

📘 Check out my latest ebook for free on my channel!

Be sure to like, share, comment, and subscribe to the channel!


101 Books

101 Books is an AI-driven publishing company co-founded by author Aarav Joshi. By leveraging advanced AI technology, we keep our publishing costs incredibly low—some books are priced as low as $4—making quality knowledge accessible to everyone.

Check out our book Golang Clean Code available on Amazon.

Stay tuned for updates and exciting news. When shopping for books, search for Aarav Joshi to find more of our titles. Use the provided link to enjoy special discounts!

Our Creations

Be sure to check out our creations:

Investor Central | Investor Central Spanish | Investor Central German | Smart Living | Epochs & Echoes | Puzzling Mysteries | Hindutva | Elite Dev | Java Elite Dev | Golang Elite Dev | Python Elite Dev | JS Elite Dev | JS Schools


We are on Medium

Tech Koala Insights | Epochs & Echoes World | Investor Central Medium | Puzzling Mysteries Medium | Science & Epochs Medium | Modern Hindutva
