DEV Community

Ashish Kumar

Posted on • Originally published at renderlog.in

JavaScript GC: Pauses, Allocation Rate, Frontend Jank

Random 30–50ms freezes with no obvious long tasks in the Performance panel often have one root cause: the garbage collector. V8 pauses JavaScript execution to reclaim memory, and if your allocation rate is high enough, those pauses happen frequently — creating jank that shows up as a sawtooth pattern in the memory timeline rather than a spike in the flame chart.

What this covers: How V8's generational garbage collector works (minor vs major GC), why allocation rate matters more than total allocation, which frontend code patterns cause unnecessary GC pressure, and how object pooling eliminates it.

Diagram of a retained object graph: GC roots holding references that prevent the garbage collector from reclaiming objects in the heap.


V8's Generational Garbage Collector

Modern JavaScript engines like V8 (Chrome, Node.js, Deno) use a generational garbage collection model based on an empirical observation called the generational hypothesis: most objects die young. A large proportion of allocations are short-lived temporaries (intermediate values in a computation, objects created inside a render function that are immediately discarded).

V8 exploits this by dividing the heap into two main regions:

| Region | Also called | Size | Object age |
| --- | --- | --- | --- |
| Young generation | New space / nursery | 1-8 MB (configurable) | Newly allocated objects |
| Old generation | Old space | Up to several GB | Objects that survived young-gen GC |

Objects start in the young generation. If they survive two minor GC cycles, they're promoted to the old generation. The idea is that if an object has already survived two collections, it's probably going to be around for a while.


Minor GC (Scavenger): Cheap and Frequent

The Minor GC, also called the Scavenger, collects the young generation. It uses a semi-space algorithm: the young generation is split into two equal halves ("from-space" and "to-space"). Live objects are copied from from-space to to-space; dead objects are just abandoned (no need to explicitly free them). The roles then swap.

Copying is fast because you only touch live objects, not dead ones. The cost of minor GC is proportional to the number of surviving objects, not the total number of allocations. This is key: if you allocate 1,000 objects but 990 die before the next minor GC, the scavenger only copies 10 objects. Very fast.

However, minor GC still stops the world briefly: execution halts while the collection runs. V8 has invested heavily in keeping this pause short (usually 1-5ms), but it still happens.

Minor GC is triggered when the young generation fills up. If you're allocating at a very high rate, the young generation fills up faster, and minor GC runs more frequently. More frequent pauses add up to more time spent paused overall, even if each individual pause is short.
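You can watch this chain in action outside the browser. The snippet below (function and constant names are illustrative) churns through short-lived temporary objects; run it under Node's standard `--trace-gc` flag and the log fills with frequent Scavenge lines, appearing more often as you raise the iteration count, because the young generation fills sooner:

```javascript
// Allocation-rate demo. Run as: node --trace-gc demo.js
// Each call allocates a temporary object that dies immediately, so almost
// nothing survives a scavenge -- but the sheer rate forces frequent minor GCs.
const ITERATIONS = 1e6;

function distance(ax, ay, bx, by) {
  const d = { dx: bx - ax, dy: by - ay }; // temporary, dead after return
  return Math.hypot(d.dx, d.dy);
}

let total = 0;
for (let i = 0; i < ITERATIONS; i++) {
  total += distance(0, 0, i % 3, i % 4);
}
console.log(total > 0); // true
```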


Major GC (Mark-Compact): Expensive and Infrequent

When the old generation fills up (either because many objects got promoted, or because the app holds a lot of long-lived state), V8 triggers a Major GC, also called Mark-Compact.

Major GC has two phases:

  1. Mark: traverse the entire object graph from GC roots, marking everything that's reachable as alive
  2. Compact: move all live objects together to eliminate fragmentation, freeing the space where dead objects were

Marking the entire old generation is expensive: it can take tens of milliseconds for a large heap. V8 uses several techniques to mitigate the pause:

  • Incremental marking: breaks the mark phase into small chunks interleaved with JavaScript execution
  • Concurrent marking: runs the marking work on background threads in parallel with JavaScript
  • Lazy sweeping: defers freeing dead objects until memory is actually needed

Despite these optimizations, major GC still causes noticeable pauses. A 30-50ms pause is not unusual for a large heap under memory pressure. At 60fps, a 50ms pause drops three frames.
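The frame-drop arithmetic is worth making explicit; a quick sketch:

```javascript
// At 60fps, each frame has a budget of 1000/60 ms. A GC pause that blocks
// the main thread consumes whole frame slots.
const FRAME_MS = 1000 / 60;   // ~16.67ms per frame
const pauseMs = 50;           // a typical major-GC pause under pressure

const droppedFrames = Math.round(pauseMs / FRAME_MS);
console.log(droppedFrames); // 3
```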


Why Allocation Rate Matters

The core problem with a high allocation rate is not memory usage; it's GC frequency. Here's the chain:

  1. High allocation rate → young generation fills up faster
  2. Young generation fills up faster → minor GC runs more frequently
  3. More frequent minor GC → more total time paused per second
  4. More objects surviving (being referenced across GC cycles) → more objects promoted to old generation
  5. Old generation grows faster → major GC triggered sooner
  6. Major GC → potentially large pause

The sawtooth pattern I mentioned at the start (10MB allocated, then a sharp drop) is the visual signature of this cycle. The heap grows during allocation, then drops when the GC runs and reclaims memory. If the drops are regular and the saw teeth are sharp, you're on a GC treadmill.


What Causes High Allocation in Frontend Code

Objects in Render Functions

The most common source of excessive allocation in React apps is creating new objects on every render:

// Allocates a new array and new objects on every render
function Component({ items, filter }) {
  const filtered = items.filter(item => item.category === filter); // new array
  const mapped = filtered.map(item => ({                           // new array + new objects
    ...item,
    displayName: item.firstName + " " + item.lastName,            // new string
  }));

  return <ul>{mapped.map(m => <li key={m.displayName}>{m.displayName}</li>)}</ul>;
}

With useMemo, these allocations only happen when items or filter changes:

function Component({ items, filter }) {
  const mapped = useMemo(() => {
    return items
      .filter(item => item.category === filter)
      .map(item => ({
        ...item,
        displayName: `${item.firstName} ${item.lastName}`,
      }));
  }, [items, filter]);

  return <ul>{mapped.map(m => <li key={m.displayName}>{m.displayName}</li>)}</ul>;
}

Array Spreads and Immutable Updates

Immutable state updates are correct React practice, but they allocate new arrays and objects:

// Each dispatch allocates a new array
function reducer(state, action) {
  switch (action.type) {
    case "ADD_ITEM":
      return [...state, action.item]; // new array on every add
    case "UPDATE_ITEM":
      return state.map(item =>       // new array + potentially new objects
        item.id === action.id ? { ...item, ...action.updates } : item
      );
    default:
      return state; // unhandled actions must return the existing state
  }
}

This is usually fine; the GC handles short-lived allocations well. It becomes a problem at high frequency: if you're dispatching updates 60 times per second (a live data feed), you're allocating a new copy of the entire array 60 times per second.

For high-frequency updates, consider Immer (which uses structural sharing to minimize allocations) or switching to a mutable data structure with explicit dirty tracking.
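Note that the `map`-with-ternary update in the reducer above already gives you structural sharing for the elements you don't touch, which is the same property Immer formalizes. A small check (object names are illustrative):

```javascript
// Only the matched element is replaced; every other element keeps its
// identity, so memoized consumers of those elements skip re-rendering
// and no new objects are allocated for them.
function updateItem(state, id, updates) {
  return state.map(item => (item.id === id ? { ...item, ...updates } : item));
}

const a = { id: 1, qty: 1 };
const b = { id: 2, qty: 5 };
const next = updateItem([a, b], 1, { qty: 2 });

console.log(next[0] === a); // false: replaced with a fresh object
console.log(next[1] === b); // true: same reference, zero new allocation
```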

String Concatenation in Loops

// Allocates a new string on every iteration
let result = "";
for (const item of largeArray) {
  result += item.name + ", "; // each += creates a new string object
}

// Better: single allocation at the end
const result = largeArray.map(item => item.name).join(", ");

V8 is pretty good at optimizing string concatenation, but in tight loops with thousands of iterations, the garbage created by intermediate strings accumulates.

Temporary Objects in Event Handlers

// Allocates new event wrapper object on every mouse move
canvas.addEventListener("mousemove", (e) => {
  const point = { x: e.clientX, y: e.clientY }; // new object every event
  updateCursor(point);
});

For high-frequency events (mousemove, scroll, touchmove), this pattern generates substantial GC pressure. Reuse a single object:

const point = { x: 0, y: 0 }; // allocated once outside the handler

canvas.addEventListener("mousemove", (e) => {
  point.x = e.clientX;
  point.y = e.clientY;
  updateCursor(point); // same object, mutated in place
});

Object Pooling: Reusing Instead of Allocating

Object pooling is the pattern of maintaining a pool of pre-allocated objects and reusing them rather than allocating and discarding. It's standard practice in game engines and is applicable to high-frequency frontend work.

For a particle system on a canvas:

class ParticlePool {
  #pool = [];
  #active = new Set();

  acquire() {
    const particle = this.#pool.pop() ?? this.#createParticle();
    this.#active.add(particle);
    return particle;
  }

  release(particle) {
    this.#active.delete(particle);
    this.#pool.push(particle); // return to pool instead of discarding
  }

  #createParticle() {
    return { x: 0, y: 0, vx: 0, vy: 0, life: 0, alpha: 1 };
  }
}

const pool = new ParticlePool();

function spawnParticle(x, y) {
  const p = pool.acquire();
  p.x = x; p.y = y;
  p.vx = (Math.random() - 0.5) * 4;
  p.vy = -Math.random() * 6;
  p.life = 60;
  return p;
}

function updateParticles(particles) {
  for (const p of particles) {
    p.life--;
    if (p.life <= 0) {
      pool.release(p); // back to pool
      particles.delete(p);
    }
  }
}

This approach works best when:

  • You allocate and discard objects at very high frequency
  • Object creation cost or GC churn is measurably impacting frame time
  • Object size and shape are consistent (pooling works poorly with objects of varying structure)

For most React components and normal data flows, object pooling is overkill. Reserve it for animation loops, canvas rendering, and real-time data processing.
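The pooling invariant is easy to verify in isolation: an object released back to the pool is handed out again on the next acquire, so steady-state churn allocates nothing new. A minimal sketch (a stripped-down version of the ParticlePool idea above):

```javascript
// Minimal pool: acquire reuses a freed object when one exists,
// falling back to allocation only when the pool is empty.
class Pool {
  #free = [];
  acquire() {
    return this.#free.pop() ?? { x: 0, y: 0 };
  }
  release(obj) {
    this.#free.push(obj);
  }
}

const pool = new Pool();
const p1 = pool.acquire(); // pool empty: allocates
pool.release(p1);          // returned to the pool
const p2 = pool.acquire(); // pool non-empty: reuses

console.log(p1 === p2); // true: recycled, not reallocated
```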


V8 Hidden Classes and Shape Optimization

V8 optimizes objects by tracking their "shape": the set of properties and their types. Objects with the same shape share a hidden class, allowing V8 to use efficient, array-like memory layouts and generate optimized machine code.

Two patterns break shape optimization:

Adding properties after construction:

// Bad: each assignment creates a new hidden class transition
function createUser(name, email) {
  const user = {};
  user.name = name;   // hidden class 1 → 2
  user.email = email; // hidden class 2 → 3
  return user;
}

// Good: initialize all properties in the constructor
function createUser(name, email) {
  return { name, email }; // single shape from creation
}

Using the delete operator:

// Bad: delete creates a new hidden class and degrades to hash map
const obj = { x: 1, y: 2, z: 3 };
delete obj.y; // V8 can no longer use the optimized layout

// Better: set to undefined or null
obj.y = undefined;

The performance impact of shape inconsistency is subtle but real in hot code paths. V8 deoptimizes functions that receive objects with different shapes (a process called "deopting"), falling back to generic, slower operations. You can spot deoptimizations in the Chrome DevTools Performance panel: look for flame chart entries labeled Deoptimize in the JS call stack.


Using the Chrome Memory Panel

Three views are worth knowing:

| View | Use case |
| --- | --- |
| Heap Snapshot | Point-in-time picture of what's alive; compare two snapshots to find leaks |
| Allocation Instrumentation on Timeline | Records allocations over time; shows which functions allocate the most |
| Allocation Sampling | Statistical sample of allocations; low overhead, good for production-like profiling |

For GC pause investigation, open the Performance panel and check Memory in the capture options. The resulting flame chart shows the heap size over time alongside JS execution. GC pauses appear as gaps in JavaScript execution where the heap drops sharply. You can see exactly which code runs before a GC pause and confirm whether your rendering/event handling code is causing the sawtooth.


Frontend-Specific Memory Pressure Tips

| Pattern | Recommendation |
| --- | --- |
| Array spreads in reducers | Fine at normal frequency; use Immer for high-frequency update paths |
| New objects in render | Memoize with useMemo if the component renders frequently |
| Closures in tight loops | Extract the function outside the loop or use a named function |
| Binary data (audio, image, WebGL) | Use TypedArrays (Float32Array, Uint8Array); their buffers live outside the regular GC'd object heap |
| DOM element lists | Pool and reuse DOM elements for virtual lists; don't create/destroy thousands of nodes |
| Module-level caches | Add a max size with LRU eviction; use WeakMap when keys are objects |
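The "max size with LRU eviction" advice for module-level caches can be sketched with a plain Map, whose guaranteed insertion-order iteration makes the least-recently-used entry the first key (the class and names are illustrative, not from the article):

```javascript
// Bounded LRU cache: reads and writes move a key to the back of the Map's
// iteration order, so the front is always the least recently used entry.
class LruCache {
  #map = new Map();
  constructor(maxSize) {
    this.maxSize = maxSize;
  }

  get(key) {
    if (!this.#map.has(key)) return undefined;
    const value = this.#map.get(key);
    this.#map.delete(key);      // re-insert to mark as most recently used
    this.#map.set(key, value);
    return value;
  }

  set(key, value) {
    this.#map.delete(key);
    this.#map.set(key, value);
    if (this.#map.size > this.maxSize) {
      // First key in iteration order is the oldest: evict it.
      this.#map.delete(this.#map.keys().next().value);
    }
  }
}

const cache = new LruCache(2);
cache.set("a", 1);
cache.set("b", 2);
cache.get("a");         // touch "a", so "b" becomes least recently used
cache.set("c", 3);      // over capacity: evicts "b"

console.log(cache.get("b")); // undefined
console.log(cache.get("a")); // 1
```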

TypedArrays deserve a special mention. For binary data processing (audio buffers, image pixel data, WebGL vertex buffers), Float32Array and Uint8Array are backed by fixed-size ArrayBuffer objects that V8 manages outside the normal GC heap. They don't participate in the generational collection process the same way ordinary objects do. Allocating a Float32Array for each audio frame is still wasteful, but the GC characteristics are fundamentally different from allocating regular JavaScript objects.
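A minimal sketch of the reuse pattern, with an illustrative `fillFrame` standing in for per-frame audio or pixel work: one Float32Array is allocated up front and overwritten in place each frame, so the processing loop itself allocates nothing:

```javascript
// One buffer, allocated once, reused for every "frame" of processing.
const FRAME_SIZE = 1024;
const scratch = new Float32Array(FRAME_SIZE);

function fillFrame(buf, t) {
  for (let i = 0; i < buf.length; i++) {
    buf[i] = Math.sin((t + i) * 0.01); // overwrite in place, no garbage
  }
  return buf;
}

let peak = 0;
for (let frame = 0; frame < 100; frame++) {
  const buf = fillFrame(scratch, frame); // same buffer every iteration
  peak = Math.max(peak, Math.abs(buf[0]));
}
console.log(peak <= 1); // true: samples stay within sin's range
```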


Summary

The GC doesn't cause the allocation problem — your code does. The GC collects what you make it collect. At 200 particles per frame at 60fps, allocating a new object for each particle creates 12,000 allocations per second — the young generation fills every 2 seconds like clockwork. Object pooling (300 pre-allocated objects, acquired at spawn, released on death) reduces GC pressure by ~90% in this pattern. The sawtooth disappears from the memory timeline.

Understanding the collector tells you exactly where to look: measure allocation rate first, then identify which hot code paths are generating the garbage.


Read the original article on Renderlog.in:
https://renderlog.in/blog/javascript-garbage-collection-frontend/

If you found this helpful, I've also built some free tools for developers and everyday users. Feel free to try them once:

JSON Tools: https://json.renderlog.in
Text Tools: https://text.renderlog.in
QR Tools: https://qr.renderlog.in
