Patoliya Infotech
JavaScript Concurrency Patterns: Web Workers, Atomics & SharedArrayBuffer

TL;DR: JavaScript is single-threaded, until it isn't. Web Workers, SharedArrayBuffer, and Atomics let you break out of the main thread and do real parallel work. This article covers everything from spawning your first worker to coordinating shared memory safely with atomic operations.

The Myth of the Single Thread

Every JavaScript developer has heard it: "JS is single-threaded." This is true for your application code on the main thread, but it's never been the full story.

The browser itself is heavily multi-threaded. The compositor, the network stack, and the GPU process all run in parallel. What was missing was a first-class API that let your JavaScript code participate in that parallelism.

If you're evaluating how concurrency fits into your broader web application architecture, understanding browser threading primitives is foundational, especially for performance-sensitive products.

That changed with:

API                 Introduced     Purpose
Web Workers         2009 (HTML5)   Run JS in a separate thread
SharedArrayBuffer   ES2017         Share raw memory between threads
Atomics             ES2017         Thread-safe operations on shared memory
Comlink (library)   2017           RPC-style Worker abstraction

Before we go further, let's frame the concurrency model clearly:

Main Thread          Worker Thread(s)
─────────────        ─────────────────
UI rendering         CPU-heavy tasks
Event loop           No DOM access
postMessage ◄──────► postMessage
SharedArrayBuffer ◄─► SharedArrayBuffer  (shared raw memory)

Web Workers

A Web Worker is a script that runs in a background OS thread, completely separate from the main thread. It has no access to the DOM, window, or document, but it does have full access to fetch, WebSockets, IndexedDB, canvas rendering (via OffscreenCanvas), and more.

Note: Web Workers are a browser API. On the server side, Node.js has its own equivalent, worker_threads, which follows the same shared-memory model with SharedArrayBuffer and Atomics.
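
For reference, here's a minimal sketch of the same handshake in Node (file names and message shape are illustrative):

// main.mjs (Node)
import { Worker } from 'node:worker_threads';

const worker = new Worker('./worker.mjs');
worker.postMessage({ task: 'heavyCompute', payload: [1, 2, 3] });
worker.on('message', (msg) => console.log('Result:', msg.result));

// worker.mjs (Node)
import { parentPort } from 'node:worker_threads';

parentPort.on('message', ({ task, payload }) => {
  if (task === 'heavyCompute') {
    parentPort.postMessage({ result: payload.reduce((a, b) => a + b, 0) });
  }
});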

Spawning a Worker

// main.js
const worker = new Worker('./worker.js', { type: 'module' });

worker.postMessage({ task: 'heavyCompute', payload: largeArray });

worker.onmessage = ({ data }) => {
  console.log('Result from worker:', data.result);
};

worker.onerror = (err) => {
  console.error('Worker error:', err.message);
};
// worker.js
self.onmessage = ({ data }) => {
  if (data.task === 'heavyCompute') {
    const result = data.payload.reduce((acc, val) => acc + val, 0);
    self.postMessage({ result });
  }
};

Gotcha: Workers are expensive to spawn (~50–100ms). Reuse them via a pool (see Pattern 1 below) rather than creating a new one per task.

Inline Workers with Blob URLs

Sometimes you want to define a worker inline rather than in a separate file:

const workerCode = `
  self.onmessage = ({ data }) => {
    const result = data.map(n => n * n);
    self.postMessage(result);
  };
`;

const blob = new Blob([workerCode], { type: 'application/javascript' });
const url = URL.createObjectURL(blob);
const worker = new Worker(url);
URL.revokeObjectURL(url); // safe once the worker is constructed; frees the blob reference

This is handy for libraries shipping as a single bundle.

Module Workers

With { type: 'module' }, workers can use ES module syntax (import/export):

// worker.js (module worker)
import { heavyTransform } from './transforms.js';

self.onmessage = async ({ data }) => {
  const result = await heavyTransform(data);
  self.postMessage(result);
};

Module workers pair naturally with modern frontend frameworks. If you're building with React or Next.js, bundlers like Vite and webpack have first-class support for module workers via new Worker(new URL('./worker.js', import.meta.url)).
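
Putting that together, the bundler-resolved form looks like this (the worker file name is illustrative):

// main.js - bundler-friendly module worker (Vite / webpack 5)
const worker = new Worker(new URL('./worker.js', import.meta.url), {
  type: 'module',
});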

Structured Clone & Transferables

By default, postMessage deep-copies data using the Structured Clone Algorithm. This supports most JS types - objects, arrays, Map, Set, ArrayBuffer, Blob, ImageData - but functions and DOM nodes throw a DataCloneError, and class instances arrive as plain objects with their prototype chain stripped.
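
A quick sketch of where that boundary sits, assuming the worker from earlier (Point is an illustrative class):

worker.postMessage({ map: new Map([['a', 1]]), buf: new ArrayBuffer(8) }); // ✅ clones fine
worker.postMessage(() => {});          // ❌ throws DataCloneError
worker.postMessage(document.body);     // ❌ throws DataCloneError

class Point { constructor(x, y) { this.x = x; this.y = y; } }
worker.postMessage(new Point(1, 2));   // Arrives as plain { x: 1, y: 2 } - prototype lost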

The Cost of Copying

For large data, cloning is expensive:

const bigArray = new Float64Array(10_000_000); // ~80 MB

// ❌ This COPIES 80MB - slow
worker.postMessage(bigArray);

// ✅ This TRANSFERS the buffer - zero-copy, near-instant
worker.postMessage(bigArray, [bigArray.buffer]);
// After transfer, bigArray.byteLength === 0 in the sender

Transferable Objects

Transferables are zero-copy - ownership moves from one thread to another. The sender can no longer access the buffer after transferring it.

Transferable types include:

  • ArrayBuffer
  • MessagePort
  • ImageBitmap
  • OffscreenCanvas
  • ReadableStream / WritableStream / TransformStream

// Transfer an OffscreenCanvas to a worker for GPU rendering
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();

worker.postMessage({ canvas: offscreen }, [offscreen]);

SharedArrayBuffer

SharedArrayBuffer (SAB) is the game-changer. Unlike transferable ownership, SAB lets multiple threads read and write the same memory simultaneously.

// main.js
const sab = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const sharedArr = new Int32Array(sab);

// Pass the SAB to workers - they all see the SAME memory
workerA.postMessage({ sharedArr });
workerB.postMessage({ sharedArr });
// workerA.js
self.onmessage = ({ data }) => {
  const arr = new Int32Array(data.sharedArr.buffer); // .buffer here is the SharedArrayBuffer itself
  arr[0] = 42; // Writes directly to shared memory
};
// workerB.js
self.onmessage = ({ data }) => {
  const arr = new Int32Array(data.sharedArr.buffer);
  console.log(arr[0]); // May read 0 or 42 depending on timing!
};

This is where things get dangerous. Without synchronization, you have a data race.
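
A minimal sketch of the race, assuming two workers run this same loop over a shared arr:

// Both workers increment the same shared counter 100,000 times
for (let i = 0; i < 100_000; i++) {
  arr[0]++; // plain read-modify-write: both threads can read the same value and lose updates
}
// Expected total across two workers: 200,000. Observed: often less.
// Atomics.add(arr, 0, 1) makes the increment indivisible and fixes this.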

Atomics

Atomics provides thread-safe, indivisible operations on SharedArrayBuffer-backed typed arrays. All Atomics operations are guaranteed to be:

  1. Atomic - no torn reads/writes
  2. Sequentially consistent - operations appear in a defined order

Core Atomics API

const sab = new SharedArrayBuffer(4);
const arr = new Int32Array(sab);

// Atomic read/write
Atomics.store(arr, 0, 99);    // Write 99 at index 0
Atomics.load(arr, 0);         // Read index 0 → 99

// Arithmetic (returns OLD value)
Atomics.add(arr, 0, 1);       // arr[0]++
Atomics.sub(arr, 0, 5);       // arr[0] -= 5
Atomics.and(arr, 0, 0b1111);  // Bitwise AND
Atomics.or(arr, 0, 0b0001);   // Bitwise OR
Atomics.xor(arr, 0, 0b1010);  // Bitwise XOR

// Compare-and-swap (the foundation of all locks)
// If arr[0] === expected, set it to replacement
// Returns the OLD value regardless
const expected = 5;
const replacement = 10;
Atomics.compareExchange(arr, 0, expected, replacement);

// Exchange - always writes, returns old value
Atomics.exchange(arr, 0, replacement);

Wait & Notify (The Mutex Primitives)

// Worker thread - blocks until notified (like a condition variable)
// Atomics.wait() is BLOCKING - never call on the main thread!
const result = Atomics.wait(arr, 0, 0, 5000); // Wait for arr[0] to change from 0 (timeout: 5s)
// result: "ok" | "not-equal" | "timed-out"

// Main thread or other worker - wakes up waiters
Atomics.notify(arr, 0, 1); // Wake up 1 waiter at index 0

Key insight: Atomics.wait() is synchronous and blocks the thread. This is intentional - it's how you implement efficient mutex sleep without spinning. Never call it on the main thread (it'll throw or deadlock). Use Atomics.waitAsync() on the main thread instead.

waitAsync - Non-Blocking Wait

// Safe to use on the main thread
const { async, value } = Atomics.waitAsync(arr, 0, 0);

if (async) {
  value.then(result => {
    console.log('Worker notified us:', result);
  });
}

Concurrency Patterns in Practice

Pattern 1: Worker Pool

Spawning workers is costly. A worker pool reuses a fixed set of workers, queuing tasks and dispatching to idle workers.

// worker-pool.js
class WorkerPool {
  #workers = [];
  #queue = [];
  #idle = [];

  constructor(workerUrl, size = navigator.hardwareConcurrency) {
    for (let i = 0; i < size; i++) {
      const worker = new Worker(workerUrl, { type: 'module' });
      worker.onmessage = ({ data }) => this.#onResult(worker, data);
      worker.onerror = (err) => this.#onError(worker, err);
      this.#workers.push(worker);
      this.#idle.push(worker);
    }
  }

  run(payload) {
    return new Promise((resolve, reject) => {
      this.#queue.push({ payload, resolve, reject });
      this.#dispatch();
    });
  }

  #dispatch() {
    while (this.#idle.length && this.#queue.length) {
      const worker = this.#idle.pop();
      const { payload, resolve, reject } = this.#queue.shift();
      worker._resolve = resolve;
      worker._reject = reject;
      worker.postMessage(payload);
    }
  }

  #onResult(worker, data) {
    worker._resolve(data);
    this.#idle.push(worker);
    this.#dispatch();
  }

  #onError(worker, err) {
    worker._reject(err); // without this, a failed task's promise would hang forever
    this.#idle.push(worker);
    this.#dispatch();
  }

  terminate() {
    this.#workers.forEach(w => w.terminate());
  }
}

// Usage
const pool = new WorkerPool('./compute-worker.js');

const results = await Promise.all(
  chunks.map(chunk => pool.run(chunk))
);

Pattern 2: Producer-Consumer with SharedArrayBuffer

A classic ring buffer shared between a producer and consumer:

// shared-ring-buffer.js
const BUFFER_SIZE = 1024;
const sab = new SharedArrayBuffer(
  2 * Int32Array.BYTES_PER_ELEMENT + // [readHead, writeHead]
  BUFFER_SIZE * Float64Array.BYTES_PER_ELEMENT
);

const ctrl = new Int32Array(sab, 0, 2);       // [read, write] indices
const data = new Float64Array(sab, 8, BUFFER_SIZE);

// Single producer / single consumer: each head is written by exactly one side

// Producer (worker A)
function produce(value) {
  const write = Atomics.load(ctrl, 1);
  const nextWrite = (write + 1) % BUFFER_SIZE;

  // Buffer full when the read head sits at nextWrite - sleep until the consumer moves it
  while (Atomics.load(ctrl, 0) === nextWrite) {
    Atomics.wait(ctrl, 0, nextWrite);
  }

  data[write] = value;
  Atomics.store(ctrl, 1, nextWrite);
  Atomics.notify(ctrl, 1); // Wake a consumer waiting on the write head
}

// Consumer (worker B)
function consume() {
  const read = Atomics.load(ctrl, 0);

  // Buffer empty while the write head equals the read head - sleep until it moves
  while (Atomics.load(ctrl, 1) === read) {
    Atomics.wait(ctrl, 1, read); // waiting on the WRITE head avoids lost wakeups
  }

  const value = data[read];
  Atomics.store(ctrl, 0, (read + 1) % BUFFER_SIZE);
  Atomics.notify(ctrl, 0); // Wake a producer waiting on the read head
  return value;
}

Pattern 3: Mutex (Mutual Exclusion Lock)

A mutex ensures only one thread enters a critical section at a time:

// mutex.js
const UNLOCKED = 0;
const LOCKED = 1;

class Mutex {
  #sab;
  #lockArr;

  constructor(sab, byteOffset = 0) {
    this.#sab = sab;
    this.#lockArr = new Int32Array(sab, byteOffset, 1);
  }

  lock() {
    // Spin until we successfully CAS UNLOCKED → LOCKED
    while (true) {
      const old = Atomics.compareExchange(
        this.#lockArr, 0, UNLOCKED, LOCKED
      );

      if (old === UNLOCKED) return; // We got the lock

      // Lock is held, wait efficiently (not spin)
      Atomics.wait(this.#lockArr, 0, LOCKED);
    }
  }

  unlock() {
    Atomics.store(this.#lockArr, 0, UNLOCKED);
    Atomics.notify(this.#lockArr, 0, 1); // Wake one waiter
  }

  withLock(fn) {
    this.lock();
    try {
      return fn();
    } finally {
      this.unlock();
    }
  }
}

// Usage in a worker
const mutex = new Mutex(sharedControlBuffer);

mutex.withLock(() => {
  // Only ONE worker executes this at a time
  sharedCounter[0]++;
});

Pattern 4: Message Passing vs. Shared Memory, When to Use Which

Criterion      Message Passing (postMessage)        Shared Memory (SAB + Atomics)
Simplicity     ✅ Easy, safe by default              ❌ Complex, error-prone
Data size      ❌ Copies data (or needs transfer)    ✅ Zero-copy reads/writes
Latency        ❌ Async round-trip                   ✅ Synchronous, near-zero overhead
Coordination   ✅ Implicit via message ordering      ❌ Explicit locks required
Debugging      ✅ Easier to reason about             ❌ Heisenbugs from race conditions
Best for       Task dispatch, results               High-frequency, low-latency data exchange

Rule of thumb: Start with postMessage. Only reach for SharedArrayBuffer when you have a proven performance bottleneck involving large, frequently updated data.

Performance & When to Use What

The Amdahl's Law Reality Check

Not everything benefits from parallelism. Amdahl's Law gives the ceiling: with a parallelizable fraction P running on N cores, Speedup = 1 / ((1 − P) + P/N). If 90% of your code is parallelizable and runs on 4 cores:

Speedup = 1 / (0.10 + 0.90/4) = 1 / (0.10 + 0.225) ≈ 3.08×

You won't get a 4× speedup; you'll get about 3×. Serial bottlenecks dominate.

When Workers Help

CPU-bound tasks: Image/video processing, cryptography, compression, physics simulation, ML inference, data parsing (CSV/JSON at scale), WebAssembly compute.

I/O-bound tasks: fetch, file reads, and WebSockets are already async and non-blocking on the main thread, so workers don't help there.
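
A hedged illustration of the split (the task names and hugeJsonString are made up):

// ❌ No benefit: fetch is already non-blocking on the main thread
// worker.postMessage({ task: 'download', url: '/big-file' });

// ✅ Worth offloading: parsing a multi-megabyte JSON string is CPU-bound
worker.postMessage({ task: 'parseJson', text: hugeJsonString });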

For teams building frontend applications from scratch, understanding this CPU-bound vs I/O-bound distinction early will save you from premature optimization traps later.

Benchmark: Main Thread vs. Worker

// Mandelbrot set - 1000x1000 pixels
// Main thread: ~850ms (blocks UI)
// 4 Workers splitting rows: ~240ms (3.5× speedup, UI stays responsive)

Security & Headers

After the Spectre attack in 2018, browsers disabled SharedArrayBuffer by default. To re-enable it, your page must be served in a cross-origin isolated context:

# Required HTTP response headers on your server
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp

Configuring these headers correctly is part of a broader DevOps and infrastructure discipline. In CI/CD pipelines, headers like COOP/COEP should be validated as part of your deployment checks, not discovered manually post-launch.

You can verify isolation in the browser:

if (crossOriginIsolated) {
  console.log('SharedArrayBuffer is available ✅');
} else {
  console.warn('Not cross-origin isolated, SAB unavailable ❌');
}

Note: These headers have implications. COEP: require-corp means all sub-resources (images, scripts, fonts) must opt in with Cross-Origin-Resource-Policy: cross-origin or be same-origin. Plan your CDN strategy accordingly.
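
In practice, that means one extra response header on every cross-origin asset you embed, for example:

# On the CDN / asset server
Cross-Origin-Resource-Policy: cross-origin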

For local development with Vite:

// vite.config.js
export default {
  server: {
    headers: {
      'Cross-Origin-Opener-Policy': 'same-origin',
      'Cross-Origin-Embedder-Policy': 'require-corp',
    },
  },
};

Automating header injection in staging and production environments is a great candidate for your CI/CD pipeline. A broken COEP header silently kills SharedArrayBuffer in production with no obvious error, so catch it in the pipeline.
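
A minimal sketch of such a check, assuming Node 18+ (global fetch) and an illustrative DEPLOY_URL variable:

// check-isolation-headers.mjs - fail the pipeline if COOP/COEP are missing
const url = process.env.DEPLOY_URL ?? 'https://staging.example.com';
const res = await fetch(url, { method: 'HEAD' });

const coop = res.headers.get('cross-origin-opener-policy');
const coep = res.headers.get('cross-origin-embedder-policy');

if (coop !== 'same-origin' || coep !== 'require-corp') {
  console.error(`Missing isolation headers: COOP=${coop}, COEP=${coep}`);
  process.exit(1);
}
console.log('Cross-origin isolation headers present ✅');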

Debugging Concurrent Code

Concurrent bugs are notoriously hard to reproduce. Here's your toolkit:

Chrome DevTools

  • Open Sources → Threads panel to inspect all worker threads simultaneously.
  • You can pause execution per thread and inspect their individual call stacks.
  • The Performance tab shows main thread vs. worker timelines side by side.

Console Logging from Workers

// main.js - give the worker a name so DevTools can identify it
const worker = new Worker('./worker.js', { name: 'DataProcessor' });

// worker.js - self.name reflects the name passed at construction
console.log('[Worker]', self.name, 'processing chunk');

Deterministic Testing

Race conditions have a habit of vanishing the moment you try to reproduce them in a test. Make the tests deterministic instead: use --single-process flags or in-process Worker mocks:

// In Jest, replace the global Worker with a synchronous fake
class FakeWorker {
  onmessage = null;
  postMessage(data) {
    // Run the worker's logic inline so results arrive deterministically
    const result = heavyCompute(data); // the same function the real worker uses
    this.onmessage?.({ data: result });
  }
  terminate() {}
}
globalThis.Worker = FakeWorker;

Concurrency bugs are among the hardest to catch in automated test suites. If your team is investing in software testing and QA, make sure your test strategy explicitly covers multi-threaded scenarios: stress tests, load tests, and race condition simulations should all be part of the plan.

Detecting Data Races

Unfortunately, JavaScript has no built-in race detector (unlike Go's -race flag or Rust's borrow checker). Your best bet:

  1. Wrap all shared memory access in Atomics - always, no exceptions.
  2. Use TypeScript to enforce typed access to shared buffers.
  3. Log + replay: record all operations with timestamps and replay them to find ordering issues (a minimal sketch follows).
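
A minimal sketch of idea 3 (the wrapper name is hypothetical):

// loggedStore: a logged wrapper around Atomics.store for later replay
function loggedStore(arr, index, value) {
  console.log(JSON.stringify({ t: performance.now(), op: 'store', index, value }));
  return Atomics.store(arr, index, value);
}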

Real-World Use Cases

1. Image Processing Pipeline (Photopea-style)

Main Thread: Upload → slice into rows → dispatch to 8 workers
Workers:     Apply filters (blur, sharpen, color grading) on row chunks
Main Thread: Reassemble rows → render to canvas

Using SharedArrayBuffer for the image pixel data + Atomics to track which rows are done eliminates all the copying overhead of postMessage.
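
One way to sketch that row-completion bookkeeping, assuming a shared Int32Array counter and a workers array (both illustrative):

// main.js - arm the counter, dispatch rows, await the last worker without blocking
Atomics.store(counter, 0, workers.length);
workers.forEach((w, i) => w.postMessage({ rowRange: i })); // row slicing elided

const { async, value } = Atomics.waitAsync(counter, 0, workers.length);
if (async) await value; // resolves 'ok' when the last worker notifies
// ('not-equal' means some workers finished before we started waiting - re-check the counter)

// worker.js - after processing its rows
if (Atomics.sub(counter, 0, 1) === 1) { // sub returns the OLD value; 1 means we were last
  Atomics.notify(counter, 0);           // wake the main thread's waitAsync
}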

2. In-Browser ML Inference (TensorFlow.js / ONNX)

Running a model on the main thread blocks the UI for hundreds of milliseconds. Offloading to a Worker with WebAssembly + SharedArrayBuffer for the tensor data keeps the UI at 60fps.

The intersection of browser performance and artificial intelligence is evolving fast. Concurrency primitives like SharedArrayBuffer are what make running LLM inference, image segmentation, or real-time NLP in the browser actually viable, not just a demo trick.

3. Real-Time Audio Processing

The AudioWorklet (a specialized worker for the Web Audio API) processes audio in 128-sample chunks at the context's sample rate (commonly 44,100 Hz). SharedArrayBuffer lets the main thread and the audio worklet exchange parameter data without the garbage-collector-induced jitter that postMessage can cause.

// audio-processor.js (AudioWorkletProcessor)
class GainProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];

    for (let ch = 0; ch < input.length; ch++) {
      for (let i = 0; i < input[ch].length; i++) {
        output[ch][i] = input[ch][i] * 0.5;
      }
    }
    return true;
  }
}
registerProcessor('gain-processor', GainProcessor);
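
A sketch of that shared parameter channel (Atomics require integer typed arrays, so the gain is stored as a scaled int; workletNode and the scaling are illustrative):

// main.js - hand the worklet a shared parameter buffer once, then write lock-free
const paramSab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const gainParam = new Int32Array(paramSab);
Atomics.store(gainParam, 0, 500);        // gain × 1000
workletNode.port.postMessage(paramSab);  // workletNode: an AudioWorkletNode created earlier

// audio-processor.js - in the processor's constructor:
//   this.port.onmessage = ({ data }) => { this.gain = new Int32Array(data); };
// and inside process():
//   const gain = this.gain ? Atomics.load(this.gain, 0) / 1000 : 1;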

4. Game Physics Engine

Split your game world into spatial sectors. Each Worker owns a sector, computes physics, then writes collision results back to a SharedArrayBuffer grid. The main thread reads the final positions for rendering, no back-and-forth messaging needed.

These same concurrency principles apply in React Native via the JSI (JavaScript Interface) and Worklets (used by Reanimated 3). If you're building performance-critical mobile apps, the mental model of off-main-thread work is identical.

Conclusion

JavaScript's concurrency story has matured enormously. Here's the mental model to carry forward:

┌───────────────────────────┬─────────────────────────┐
│  Problem                  │  Tool                   │
├───────────────────────────┼─────────────────────────┤
│  Offload CPU work         │  Web Worker             │
│  Large data, ownership    │  Transferable Objects   │
│  Shared state, low-latency│  SharedArrayBuffer      │
│  Safe shared state ops    │  Atomics                │
│  Thread coordination      │  Atomics.wait/notify    │
│  Ergonomic Workers        │  Comlink (library)      │
└───────────────────────────┴─────────────────────────┘

Key takeaways:

  • Web Workers are the foundation: use them for anything CPU-bound that shouldn't block the UI.
  • Transferable objects are your first optimization: they eliminate copy overhead for large buffers.
  • SharedArrayBuffer unlocks true shared memory, but demands discipline. Don't use it unless you've profiled and found a genuine bottleneck.
  • Atomics are the only safe way to access shared memory. No raw reads/writes without synchronization, ever.
  • Always set COOP/COEP headers to enable cross-origin isolation before using SAB.

The web platform now gives you the tools to write genuinely parallel code. Use them thoughtfully, and your users will feel the difference.
