Nithin Bharadwaj
8 Web Worker Techniques That Stop JavaScript From Freezing Your Browser

I remember the exact moment I realized JavaScript had a serious problem. I was building a photo filter app. I applied a blur effect to a large image, and the whole page froze. I couldn't scroll, click, or interact. The browser tab turned white, and the operating system eventually asked me to kill it. That’s when I learned that JavaScript runs on a single thread. Any heavy computation blocks the user interface, making the application feel broken. Web Workers came to my rescue. They let me run scripts in the background without touching the main thread. Over time I discovered eight techniques that turned me from a frustrated developer into someone who could handle massive CPU work without breaking a sweat. I want to share them with you in the simplest way possible.

The first technique is managing the worker lifecycle. A worker is like a tiny program that runs in its own world. You create it, give it a task, and eventually kill it when you are done. But if you just create workers one after another without a plan, you end up with memory leaks and zombie threads. I built a worker factory that makes creating and destroying workers neat. Instead of writing a separate file for every worker, I generate the worker code on the fly using a Blob URL. The factory stores each worker in a map so I can track it. When I finish, I release the memory by revoking the Blob URL and terminating the worker. This keeps my application clean.

class WorkerFactory {
  constructor() {
    this.workers = new Map();
  }

  // Build a worker from a plain function. The function is serialized with
  // toString(), so it must be self-contained — it cannot close over
  // variables from the enclosing scope.
  createFromFunction(fn, options = {}) {
    const code = `self.onmessage = async (e) => {
      const result = await (${fn.toString()})(e.data);
      self.postMessage(result);
    }`;
    const blob = new Blob([code], { type: 'application/javascript' });
    const url = URL.createObjectURL(blob);
    const worker = new Worker(url, options);
    this.workers.set(worker, { url, busy: false });
    return worker;
  }

  createFromURL(url, options = {}) {
    const worker = new Worker(url, options);
    this.workers.set(worker, { url, busy: false });
    return worker;
  }

  terminate(worker) {
    const entry = this.workers.get(worker);
    if (entry) {
      URL.revokeObjectURL(entry.url);
      worker.terminate();
      this.workers.delete(worker);
    }
  }

  terminateAll() {
    for (const [worker, entry] of this.workers) {
      URL.revokeObjectURL(entry.url);
      worker.terminate();
    }
    this.workers.clear();
  }
}
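Here is a hypothetical usage sketch (browser only; the `sumInWorker` name and the summing task are just for illustration). Because the function is stringified, it must be self-contained:

```javascript
// Hypothetical usage of the WorkerFactory above (browser only).
// The function passed in is serialized, so it must not reference
// any variables from the surrounding scope.
function sumInWorker(factory, nums) {
  const worker = factory.createFromFunction((arr) =>
    arr.reduce((a, b) => a + b, 0)
  );
  return new Promise((resolve) => {
    worker.onmessage = (e) => {
      factory.terminate(worker); // revokes the Blob URL and frees the thread
      resolve(e.data);
    };
    worker.postMessage(nums);
  });
}
```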

Now that I had control over individual workers, I faced a second problem. If I spawned too many workers at once, the browser would choke. My CPU would scream. So I built a thread pool. A thread pool holds a fixed number of workers, usually matching the number of CPU cores. Tasks go into a queue. Whenever a worker becomes idle, it picks the next task from the queue. This keeps the system busy but not overloaded. I added priority support because some tasks are more urgent than others. The pool sorts the queue by priority before dispatching. This pattern is the backbone of any serious parallel processing in JavaScript.

class WorkerPool {
  constructor(workerFactory, size = navigator.hardwareConcurrency || 4) {
    this.workers = [];
    this.queue = [];
    this.resolvers = new Map();
    this.idCounter = 0;

    for (let i = 0; i < size; i++) {
      const worker = workerFactory.createFromFunction(WorkerPool.taskRunner);
      const entry = { worker, busy: false };
      worker.onmessage = (e) => {
        entry.busy = false; // free the worker that actually finished
        this.handleResult(e.data);
      };
      this.workers.push(entry);
    }
  }

  // Runs inside the worker. It echoes the task id so results can be matched
  // back to their promises; replace the body with real work in your own pool.
  static taskRunner({ id, data }) {
    return { id, result: data };
  }

  execute(data, priority = 0) {
    return new Promise((resolve, reject) => {
      const id = this.idCounter++;
      this.resolvers.set(id, { resolve, reject });
      this.queue.push({ id, data, priority });
      this.dispatch();
    });
  }

  dispatch() {
    if (this.queue.length === 0) return;
    const idleWorker = this.workers.find(w => !w.busy);
    if (!idleWorker) return;

    // Highest priority first
    this.queue.sort((a, b) => b.priority - a.priority);
    const task = this.queue.shift();
    idleWorker.busy = true;
    idleWorker.worker.postMessage({ id: task.id, data: task.data });
  }

  handleResult({ id, result, error }) {
    const resolver = this.resolvers.get(id);
    if (resolver) {
      this.resolvers.delete(id);
      if (error) resolver.reject(new Error(error));
      else resolver.resolve(result);
    }
    this.dispatch();
  }

  terminate() {
    for (const { worker } of this.workers) {
      worker.terminate();
    }
    this.workers = [];
    this.queue = [];
    this.resolvers.clear();
  }
}

The third technique changed my life: transferable objects. When you send data to a worker, JavaScript normally copies it. That copying takes time and memory, especially with large arrays. Transferable objects let you give the data to the worker without copying. Ownership moves from the main thread to the worker, and the original buffer is detached: its length drops to zero. For image processing or binary data, this is a huge speed boost. I remember processing a 4K image in a few milliseconds instead of seconds. You just list the buffers in the second argument of postMessage.

function processImage(imageData) {
  const buffer = imageData.data.buffer;
  const worker = new Worker('image-worker.js');
  // Listing the buffer in the transfer list moves it instead of copying it
  worker.postMessage({ imageData: buffer }, [buffer]);
  return new Promise(resolve => {
    worker.onmessage = (e) => {
      const resultBuffer = e.data;
      worker.terminate();
      // width and height are plain numbers, still readable after the transfer
      const resultData = new ImageData(
        new Uint8ClampedArray(resultBuffer),
        imageData.width,
        imageData.height
      );
      resolve(resultData);
    };
  });
}

// image-worker.js
self.onmessage = (e) => {
  const pixels = new Uint8ClampedArray(e.data.imageData);
  for (let i = 0; i < pixels.length; i += 4) {
    // Standard luma weights for grayscale conversion
    const gray = 0.3 * pixels[i] + 0.59 * pixels[i + 1] + 0.11 * pixels[i + 2];
    pixels[i] = pixels[i + 1] = pixels[i + 2] = gray;
  }
  // Transfer the buffer back rather than copying it
  self.postMessage(pixels.buffer, [pixels.buffer]);
};
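You can see the detachment without spinning up a worker at all: modern browsers and Node expose structuredClone, which accepts the same transfer list as postMessage. A minimal sketch of the semantics:

```javascript
// Transferring detaches the source buffer, just like postMessage does.
const src = new Uint8Array([1, 2, 3, 4]);
const moved = structuredClone(src, { transfer: [src.buffer] });

// moved holds the bytes; src's buffer is now detached, so its length is 0
```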

The fourth technique is SharedArrayBuffer. This was the scary one at first. SharedArrayBuffer lets multiple workers read and write the same memory without sending messages. It’s true parallel processing. But you have to be careful because two workers might write to the same spot at the same time. That causes race conditions. JavaScript gives you atomic operations through the Atomics object to prevent that. I use a simple spinlock to protect critical sections. SharedArrayBuffer is perfect for real-time audio processing or physics simulations where you need constant data flow without the overhead of messages.

// Reserve the first 8 bytes for the lock word so data writes never overlap it
const sharedBuffer = new SharedArrayBuffer(8 + 1000 * 8);
const lock = new Int32Array(sharedBuffer, 0, 1);
const sharedArray = new Float64Array(sharedBuffer, 8, 1000);

function acquireLock() {
  // Spin until the lock word flips from 0 (free) to 1 (held)
  while (Atomics.compareExchange(lock, 0, 0, 1) !== 0) {}
}

function releaseLock() {
  Atomics.store(lock, 0, 0);
}

// Worker 1 writes
acquireLock();
sharedArray[0] = Math.random();
releaseLock();

// Worker 2 reads
acquireLock();
const value = sharedArray[0];
releaseLock();
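The locking pattern can be exercised even single-threaded, since SharedArrayBuffer and Atomics also exist in Node. This sketch wraps acquire and release in a helper so the lock is always released, even if the critical section throws:

```javascript
// A withLock helper around the spinlock pattern above. The lock word gets
// its own 8 bytes so the Float64 data never clobbers it.
const sab = new SharedArrayBuffer(8 + 4 * 8); // 8 lock bytes + four doubles
const lockWord = new Int32Array(sab, 0, 1);
const values = new Float64Array(sab, 8, 4);

function withLock(fn) {
  while (Atomics.compareExchange(lockWord, 0, 0, 1) !== 0) {} // spin to acquire
  try {
    return fn();
  } finally {
    Atomics.store(lockWord, 0, 0); // always release, even on throw
  }
}

withLock(() => { values[0] = 42.5; });
const read = withLock(() => values[0]);
```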

The fifth technique is task scheduling and prioritization. Sending everything to a pool works, but sometimes you need to pause, cancel, or throttle tasks. I built a scheduler that breaks a big job into small chunks and sends them to the pool one by one. The main thread can check if the browser is idle with requestIdleCallback before scheduling the next chunk. This keeps the UI responsive even during heavy work. If the user navigates away, I cancel all pending tasks. The scheduler gives me fine control over what runs and when.

class Scheduler {
  constructor(pool) {
    this.pool = pool;
    this.pending = new Map();
  }

  schedule(data, priority, onProgress) {
    const id = this.generateId();
    this.pending.set(id, { data, priority, onProgress, cancelled: false });
    this.processQueue();
    return id;
  }

  async processQueue() {
    // Drain every pending task, not just the first one
    for (const [id, task] of this.pending) {
      this.pending.delete(id);
      if (task.cancelled) continue;
      try {
        await this.pool.execute(task.data, task.priority);
        if (!task.cancelled && task.onProgress) task.onProgress(1);
      } catch (e) {
        console.error(e);
      }
    }
  }

  cancel(id) {
    const task = this.pending.get(id);
    if (task) {
      task.cancelled = true;
      this.pending.delete(id);
    }
  }

  generateId() {
    return Date.now().toString(36) + Math.random().toString(36).slice(2);
  }
}
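The requestIdleCallback side of this is easiest to see in a small sketch. `splitIntoChunks` is a pure helper; `runWhenIdle` is browser only, and the `pool` argument is assumed to be a WorkerPool-like object with an `execute` method. It dispatches chunks only while the browser reports spare time in the frame:

```javascript
// Split a flat list of work items into fixed-size chunks (pure helper).
function splitIntoChunks(items, chunkSize) {
  const chunks = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize));
  }
  return chunks;
}

// Browser only: hand chunks to the pool during idle periods so the UI
// never starves, and reschedule when the idle deadline runs out.
function runWhenIdle(pool, items, chunkSize, onChunkDone) {
  const chunks = splitIntoChunks(items, chunkSize);
  function pump(deadline) {
    while (chunks.length > 0 && deadline.timeRemaining() > 1) {
      pool.execute(chunks.shift()).then(onChunkDone);
    }
    if (chunks.length > 0) requestIdleCallback(pump);
  }
  requestIdleCallback(pump);
}
```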

The sixth technique is error handling and recovery. Workers can crash. Network requests inside a worker can fail. If you don’t handle errors, your whole application might hang. I wrote a function that creates a resilient worker. It listens for the onerror event. When a worker fails, it terminates the broken worker and spins up a new one. It also retries the task up to a maximum number of times. Inside the worker, I wrap the logic in a try-catch and post back an error object. The main thread checks for errors and rejects the promise. This pattern saved me many times during long batch processes.

function createResilientWorker(url, options = {}) {
  const maxRetries = options.maxRetries || 3;
  let retries = 0;
  let worker;

  // A stable proxy: callers keep using the same object even after the
  // underlying worker crashes and is replaced.
  const proxy = {
    onmessage: null,
    postMessage: (msg, transfer = []) => worker.postMessage(msg, transfer),
    terminate: () => worker.terminate(),
  };

  function startWorker() {
    worker = new Worker(url, options);
    worker.onmessage = (e) => proxy.onmessage && proxy.onmessage(e);
    worker.onerror = (event) => {
      console.error('Worker error:', event.message);
      if (retries < maxRetries) {
        retries++;
        worker.terminate();
        startWorker(); // replace the crashed worker
      }
    };
    worker.onmessageerror = (event) => {
      console.error('Worker message error:', event.data);
    };
  }

  startWorker();
  return proxy;
}
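The worker-side half of the pattern is a handler that never lets an exception escape. This is a sketch: the `handleTask` function and the `{ id, data }` message shape are assumptions chosen to match the pool above.

```javascript
// Wrap a task function so failures come back as { id, error } messages
// instead of crashing the worker.
function makeSafeHandler(handleTask, post) {
  return async (e) => {
    const { id, data } = e.data;
    try {
      const result = await handleTask(data);
      post({ id, result });
    } catch (err) {
      post({ id, error: err.message });
    }
  };
}

// Inside a worker file:
// self.onmessage = makeSafeHandler(myTask, (msg) => self.postMessage(msg));
```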

The seventh technique is worker nesting. Sometimes a single worker is not enough. For very large matrix computations, I had a master worker that spawned its own child workers. The master receives a big matrix, splits it into chunks, and sends each chunk to a child worker. The children compute in parallel, then the master combines the results. Worker nesting allows hierarchical parallelism. You have to be careful about resource limits, but it works beautifully. Each child worker calls importScripts to load the helper functions it shares with the others.

// main-worker.js
self.onmessage = function(e) {
  const { matrix, rows, cols } = e.data;
  // Never spawn more children than there are rows to process
  const workerCount = Math.min(navigator.hardwareConcurrency || 4, rows);
  const chunkSize = Math.floor(rows / workerCount);
  const workers = [];
  const results = new Array(workerCount);
  let completed = 0;

  for (let i = 0; i < workerCount; i++) {
    const startRow = i * chunkSize;
    const endRow = (i === workerCount - 1) ? rows : startRow + chunkSize;
    const worker = new Worker('child-worker.js');
    worker.postMessage({ matrix: matrix.slice(startRow * cols, endRow * cols), cols });
    workers.push(worker);
  }

  workers.forEach((worker, index) => {
    worker.onmessage = (e) => {
      results[index] = e.data;
      completed++; // count finished children; array length lies for sparse results
      if (completed === workers.length) {
        self.postMessage(results.flat());
        workers.forEach(w => w.terminate());
      }
    };
  });
};
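For completeness, here is a hypothetical child-worker.js to pair with the master above. The real per-row math is your own; this version sums each row of its flat chunk as a stand-in, kept as a pure function so only the commented last line is worker glue:

```javascript
// child-worker.js (sketch). Receives a flat row-major chunk plus the column
// count and returns one value per row — here a row sum as a placeholder.
function processChunk(flatChunk, cols) {
  const rows = flatChunk.length / cols;
  const out = [];
  for (let r = 0; r < rows; r++) {
    let sum = 0;
    for (let c = 0; c < cols; c++) sum += flatChunk[r * cols + c];
    out.push(sum);
  }
  return out;
}

// self.onmessage = (e) => self.postMessage(processChunk(e.data.matrix, e.data.cols));
```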

The eighth technique is progress reporting. Users hate staring at a blank screen. They want to know how far along a task is. Inside a worker, I send periodic messages with a progress value between 0 and 1. The main thread updates a progress bar. I throttle the messages so they don’t flood the main thread. For example, I send a progress update only every 100 iterations. This keeps the UI responsive and gives the user a sense of progress. I even send an estimated time remaining. It makes the application feel professional and trustworthy.

// worker.js
self.onmessage = async (e) => {
  const total = e.data.total;
  for (let i = 0; i < total; i++) {
    await new Promise(resolve => setTimeout(resolve, 10));
    if (i % 100 === 0) {
      self.postMessage({ progress: i / total });
    }
  }
  self.postMessage({ progress: 1, done: true });
};

// main.js
const worker = new Worker('worker.js');
worker.onmessage = (e) => {
  if (e.data.progress !== undefined) {
    updateProgressBar(e.data.progress);
    if (e.data.done) {
      console.log('Complete');
    }
  }
};
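The estimated-time-remaining I mentioned is just an extrapolation from elapsed time and the progress fraction. A sketch (the `showEta` and `t0` names in the usage comment are hypothetical):

```javascript
// Extrapolate remaining milliseconds from elapsed time and progress (0..1).
function estimateRemainingMs(startTime, now, progress) {
  if (progress <= 0) return Infinity;
  const elapsed = now - startTime;
  return elapsed * (1 - progress) / progress;
}

// On the main thread, feed it each progress message:
// worker.onmessage = (e) =>
//   showEta(estimateRemainingMs(t0, performance.now(), e.data.progress));
```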

These eight techniques gave me the power to write applications that handle millions of data points without freezing. But the story does not end there. I also needed to integrate this with React. I created a custom hook that wraps a worker pool. The hook creates the pool when the component mounts and destroys it on unmount. It exposes an execute function and a status state. Inside the hook, I handle loading, success, and error states gracefully. Now I can use Web Workers in any React component with just a few lines.

import { useRef, useState, useEffect, useCallback } from 'react';

function useWorkerPool(workerFactory, poolSize) {
  const poolRef = useRef(null);
  const [status, setStatus] = useState('idle');

  useEffect(() => {
    poolRef.current = new WorkerPool(workerFactory, poolSize);
    return () => poolRef.current?.terminate();
  }, [workerFactory, poolSize]);

  const execute = useCallback(async (data) => {
    setStatus('running');
    try {
      const result = await poolRef.current.execute(data);
      setStatus('idle');
      return result;
    } catch (error) {
      setStatus('error');
      throw error;
    }
  }, []);

  return { execute, status };
}

I use this hook in my image processing app. The main thread never blocks. The user can scroll and click while a heavy filter applies. The progress bar fills smoothly. I even combined SharedArrayBuffer with a thread pool to build a real-time audio visualizer. The experience changed how I see JavaScript. It is not just a language for clicking buttons. It can do serious number crunching. But you have to treat the workers with respect. Manage their lifecycles. Pool them. Transfer data efficiently. Protect shared memory. Handle errors. Report progress. When you do all that, you get a responsive, fast, and professional application.

I hope you try these techniques yourself. Start with a simple worker factory and a thread pool. Then experiment with transferable objects. Once you feel comfortable, explore SharedArrayBuffer. Build a scheduler. Make your workers resilient. Try nesting them. Add a progress bar. And finally, integrate it all into your favorite framework. Your users will thank you. Your CPU will run cooler. And you will never fear a long computation again.
