DEV Community

Olexandr Uvarov
I Crashed a Mobile Browser With One CSS Property. Here's What I Learned About Rendering.

A few months ago, I was debugging a checkout flow on mobile. Everything looked fine on desktop — smooth step transitions, nice fade-ins, snappy UI. Then QA pinged me: "the payment page freezes on iPhone SE for ~2 seconds when opening."

I opened Web Inspector, navigated to the Layers panel, and my jaw dropped. What should have been 3–4 composite layers turned into 34. Every pricing card, every badge, every animated element had been silently promoted to its own GPU layer. The page was eating over 200 MB of GPU memory just to show a pricing table.

The fix? Moving one z-index and removing two will-change declarations. Three lines of CSS.

But to understand why that worked, you need to understand how browsers actually render pages. And that's what this article is about.


The rendering pipeline: from HTML to pixels

Every time a browser displays a webpage, it follows a pipeline. Think of it like a factory assembly line — each stage builds on the previous one:

  HTML ──► DOM
                ──► Render Tree ──► Layout ──► Paint ──► Composite ──► 🖥️
  CSS  ──► CSSOM                   (Reflow)   (Repaint)   (GPU)

Quick walkthrough:

DOM + CSSOM → Render Tree. The browser parses HTML into a DOM tree and CSS into a CSSOM. It merges them into a Render Tree — but not everything makes it in. display: none elements are excluded (they don't take up space). visibility: hidden elements stay (invisible but still occupy layout). Pseudo-elements like ::before get added even though they're not in the DOM.

Layout (Reflow). The browser walks the Render Tree and calculates where and how big every element is. It starts from the root and works down, sometimes making multiple passes when later elements affect earlier ones.

Paint (Repaint). Now it knows the geometry — time to fill in the pixels. Colors, borders, text, shadows, backgrounds.

Composite. The browser splits the page into layers, rasterizes them, and hands them to the GPU, which combines everything into the final image you see.

The first three stages run on the main thread (same thread as your JavaScript). The last one runs on a separate compositor thread. This distinction is everything for animation performance.


Reflow: the expensive one

Reflow is the browser recalculating layout. It's triggered by anything that changes an element's geometry — and it cascades. Change the width of a parent, and every child might need to be recalculated too.

What triggers it:

/* All of these invalidate layout: */
width, height, margin, padding, border,
display, position, top, left, right, bottom,
font-size, font-weight, line-height, float,
overflow, text-align, vertical-align, white-space

But here's the part that bites people in code reviews — reading layout properties forces a synchronous Reflow:

// ❌ Layout thrashing — forces Reflow on EVERY iteration
for (let i = 0; i < cards.length; i++) {
  const height = cards[i].offsetHeight;     // READ → forces sync reflow
  cards[i].style.height = height + 10 + 'px'; // WRITE → invalidates layout
}

// ✅ Batch reads first, then writes
const heights = [...cards].map(c => c.offsetHeight); // spread in case cards is a NodeList
heights.forEach((h, i) => {
  cards[i].style.height = (h + 10) + 'px';
});

Properties like offsetHeight, offsetWidth, getBoundingClientRect(), scrollTop, clientLeft — all of them force the browser to stop everything and recalculate layout immediately to give you an accurate number.

📎 Paul Irish maintains the definitive list: what-forces-layout.md
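Another review-friendly pattern: cache the first layout read and reuse it until the next write. A minimal sketch (cachedHeight and invalidateLayoutCache are hypothetical helper names, not a library API):

```javascript
// Cache layout reads so repeated queries in the same frame don't each
// force a synchronous layout. Clear the cache after any DOM write.
const layoutCache = new Map();

function cachedHeight(el) {
  if (!layoutCache.has(el)) {
    layoutCache.set(el, el.offsetHeight); // at most one forced reflow
  }
  return layoutCache.get(el);
}

function invalidateLayoutCache() {
  layoutCache.clear(); // call after writes, or once per frame
}
```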


Repaint: cheaper, but still not free

Repaint happens when visual properties change without affecting geometry — colors, shadows, outlines, visibility.

The key relationship to memorize:

Reflow  → ALWAYS triggers Repaint (layout changed → must redraw)
Repaint → NEVER triggers Reflow (visual change only → no geometry recalc)

Quick comparison:

/* Triggers Reflow + Repaint — removed from layout entirely */
display: none;

/* Triggers only Repaint — invisible but keeps its space */
visibility: hidden;

/* Triggers only Composite once the element has its own layer — cheapest path */
opacity: 0;

Three ways to hide an element, three very different performance costs.


Composite: where animations become smooth (or crash your browser)

This is the stage that makes 60fps animations possible — and the stage that can eat all your GPU memory if you're not careful.

After painting, the browser groups elements into layers and sends them to the GPU. The GPU's job is simple: take these pre-rendered images and move/transform/blend them. It's incredibly fast at this because that's literally what GPUs were built for.

This is why transform and opacity animations are smooth — they only need Composite. The GPU just shifts a cached texture around. No Layout. No Paint. No main thread involvement at all.

Picture a demo with two balls bouncing side to side: the green one animated with CSS transform (compositor thread), the red one with JS requestAnimationFrame (main thread). Block the main thread for two seconds and the green ball keeps moving while the red one freezes. That's the compositor thread in action.

The GPU is a separate computer

This mental model changed everything for me. The GPU isn't just "a fast part of your CPU." It's a separate device with its own memory. The browser has to:

  1. Rasterize each layer into a bitmap image (on CPU)
  2. Upload that bitmap to GPU memory
  3. Send instructions (position, transforms, opacity values)

It's like an AJAX request — you can't tell the server "just grab this from the DOM." You have to serialize the data and send it. Except here, instead of JSON over the network, you're sending pixel data over a memory bus. And those extra milliseconds of transfer time? They're the "flicker" you sometimes see at the start of an animation.

Memory: the silent killer

Every composite layer is stored as an uncompressed RGBA bitmap in GPU memory:

Layer memory = width × height × 4 bytes
| What you're rendering | Memory |
| --- | --- |
| One 320×240 element | ~300 KB |
| 10-slide carousel (800×600) with `will-change` | ~19 MB |
| Same carousel on a 2× Retina display | ~76 MB |
| Same carousel on a 3× mobile display | ~172 MB |

A solid red 320×240 rectangle compresses to 104 bytes as a PNG. In GPU memory, it's 300 KB. The GPU doesn't do PNG — it stores raw pixels.

An iPhone SE has roughly 200–300 MB available for your entire page's GPU layers. One poorly optimized component can eat all of it.
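The arithmetic is easy to script. A quick sketch (layerBytes is a hypothetical helper, not a browser API) for sanity-checking a layer budget:

```javascript
// Layer memory = CSS width × CSS height × 4 bytes (RGBA),
// multiplied by devicePixelRatio² on high-DPI screens.
function layerBytes(cssWidth, cssHeight, dpr = 1) {
  return cssWidth * dpr * cssHeight * dpr * 4;
}

const MB = 1_000_000;
console.log(layerBytes(320, 240) / 1000);       // 307.2 (KB) — one small element
console.log(10 * layerBytes(800, 600, 3) / MB); // 172.8 (MB) — the 3× carousel
```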


Implicit compositing: the bug you didn't write

This is what crashed my checkout page. And I bet you have it in your codebase right now.

Setup: Element A sits above element B (higher z-index). You animate B with transform. The browser promotes B to its own GPU layer for the animation.

Problem: Element A must visually stay above B. But A is still on the base layer. The GPU composites layers in order — it can't interleave elements from different layers. So the browser is forced to promote A to its own layer too, even though A has nothing to do with the animation.

Before animation:
┌─────────────────────┐
│  Base layer          │
│  ┌─A─┐              │  A and B both painted on
│  │   │  ┌─B─┐       │  the same base layer
│  └───┘  │   │       │
│         └───┘       │
└─────────────────────┘

During B's transform animation:
┌─────────────────────┐
│  Base layer          │  Layer 1: base (repainted without A and B)
│                      │
└─────────────────────┘
┌─────────────────────┐
│  B layer             │  Layer 2: B (for animation)
└─────────────────────┘
┌─────────────────────┐
│  A layer             │  Layer 3: A (IMPLICIT — just to stay on top!)
└─────────────────────┘
                          GPU composites: 1 → 2 → 3

That's implicit compositing. You didn't ask for this. The browser did it behind your back to preserve correct visual ordering. Each extra layer means:

  • Extra Repaint to create the layer texture
  • Extra memory on the GPU
  • Extra transfer time CPU → GPU

On my checkout page, I had an animated background behind pricing cards. Every card, every badge element sitting above it got implicitly promoted. 34 layers. 200+ MB.

The fix:

/* ✅ Keep animated elements on TOP of stacking context */
.animated-bg {
  position: fixed;
  z-index: -1;  /* below content — no implicit compositing */
}

/* or explicitly promote only what needs it */
.pricing-card {
  position: relative;
  z-index: auto; /* don't create new stacking context unnecessarily */
}

Layer squashing: when the browser "helps" and makes it worse

Modern browsers try to be smart about implicit compositing. When multiple overlapping elements get promoted to separate layers, the browser may squash them into a single shared layer to save memory. This is called Layer Squashing — and most of the time, it works great.

But sometimes it backfires. Imagine 20 small badges overlapping an animated element. Instead of 20 tiny layers, the browser squashes them into one giant layer that covers the bounding box of all 20 badges combined. That single merged layer can end up consuming more memory than the 20 small ones would have.

20 small layers:  50×50×4 × 20 = 200 KB  ✅
1 squashed layer: 800×600×4    = 1.9 MB  😱

If you spot this in DevTools (one mysteriously large layer covering a big area), you can disable squashing by giving each element a slightly different translateZ value:

/* Force separate layers — prevents squashing */
.badge:nth-child(1) { transform: translateZ(0.0001px); }
.badge:nth-child(2) { transform: translateZ(0.0002px); }
.badge:nth-child(3) { transform: translateZ(0.0003px); }
/* ... */

The browser sees elements on different "planes" in 3D space and can't merge them. Use this trick sparingly and only when DevTools confirms squashing is the problem — in most cases the browser's default behavior is fine.
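If writing one nth-child rule per badge gets unwieldy, the same trick can be applied from JS. A sketch (spreadLayers is a hypothetical helper name):

```javascript
// Give each element a unique sub-pixel translateZ so the browser
// treats them as separate 3D planes and won't squash them together.
function spreadLayers(elements) {
  Array.from(elements).forEach((el, i) => {
    el.style.transform = `translateZ(${(i + 1) / 10000}px)`;
  });
}

// In the browser: spreadLayers(document.querySelectorAll('.badge'));
```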


Practical optimizations

1. Only animate transform and opacity

These are the only properties guaranteed to skip Layout and Paint:

/* 🐌 CPU: Layout → Paint → Composite on EVERY frame */
@keyframes slide-bad {
  from { left: 0; }
  to   { left: 200px; }
}

/* 🚀 GPU: Composite ONLY */
@keyframes slide-good {
  from { transform: translateX(0); }
  to   { transform: translateX(200px); }
}

Need to animate color? Fake it with a pseudo-element and opacity:

.button {
  background: #3b82f6;
  position: relative;
}
.button::after {
  content: '';
  position: absolute;
  inset: 0;
  border-radius: inherit;
  background: #1d4ed8;
  opacity: 0;
  transition: opacity 0.2s;
}
.button:hover::after {
  opacity: 1;
}

2. transition vs @keyframes — and a common misconception

CSS gives you two ways to animate: Transitions and Animations (@keyframes). Both can run on the compositor thread — but only if you animate the right properties.

Transitions react to a state change. Point A → Point B. Simple.

/* Transition: fires when .active class is added/removed */
.card {
  transition: transform 0.3s ease;
}
.card.active {
  transform: translateY(-10px);
}

@keyframes define a multi-step animation. Can loop, run automatically, go forward and backward.

/* Animation: starts immediately, loops forever */
.spinner {
  animation: spin 1s linear infinite;
}
@keyframes spin {
  from { transform: rotate(0deg); }
  to   { transform: rotate(360deg); }
}

Here's the misconception I see all the time: developers think that just using @keyframes automatically means GPU-accelerated, smooth animation. It doesn't. What matters is WHICH properties you animate, not HOW you animate them.

/* ❌ @keyframes but STILL triggers Reflow every frame */
@keyframes grow-bad {
  from { width: 100px; height: 100px; }
  to   { width: 300px; height: 300px; }
}

/* ❌ Same problem — visibility triggers Repaint every frame */
@keyframes blink-bad {
  0%   { visibility: visible; }
  50%  { visibility: hidden; }
  100% { visibility: visible; }
}

/* ✅ Composite only — GPU handles this entirely */
@keyframes grow-good {
  from { transform: scale(1); }
  to   { transform: scale(3); }
}

/* ✅ Composite only — no Repaint, no Reflow */
@keyframes fade-good {
  0%   { opacity: 1; }
  50%  { opacity: 0; }
  100% { opacity: 1; }
}

The same rule applies to transitions:

/* ❌ transition on width — Reflow on every frame */
.panel {
  width: 0;
  transition: width 0.3s ease;
}
.panel.open {
  width: 400px;
}

/* ✅ transition on transform — Composite only */
.panel {
  transform: scaleX(0);
  transform-origin: left;
  transition: transform 0.3s ease;
}
.panel.open {
  transform: scaleX(1);
}

Both panels look like they're expanding. But the first one forces the browser to recalculate layout for the panel and everything around it, 60 times per second. The second one just tells the GPU to stretch a cached texture. (One caveat: scaleX stretches the panel's contents too, so this works best for empty panels, overlays, and progress bars, or when you counter-scale the children.)

Quick rule: @keyframes and transition are just delivery mechanisms. The performance depends entirely on which CSS properties are inside them. transform and opacity → GPU fast path. Anything else → main thread, Reflow/Repaint on every frame.

3. CSS animations > JS animations

CSS animations are declarative — the browser knows start, end, and duration upfront, so it pre-calculates everything and ships it to the GPU compositor thread.

JS animations are imperative — you compute each frame yourself, 60 times per second, on the main thread. If any JS computation takes too long, your animation stutters.

// JS animation — hostage to the main thread
const el = document.querySelector('.element');
let pos = 0;

function animate() {
  el.style.transform = `translateX(${pos}px)`;
  pos += 2;
  if (pos < 200) requestAnimationFrame(animate);
}
requestAnimationFrame(animate);

// If something heavy runs on main thread → animation freezes
/* CSS animation — runs on compositor thread, immune to JS blocking */
.element {
  transition: transform 0.3s ease;
}
.element.active {
  transform: translateX(200px);
}

The third option: Web Animations API

There's actually a middle ground between "pure CSS" and "manual rAF loop." The Web Animations API (Element.animate()) gives you JavaScript's flexibility with CSS animation's performance:

// Web Animations API — runs on compositor thread like CSS,
// but with full JS control: pause, reverse, seek, dynamic values
const card = document.querySelector('.card');

const animation = card.animate([
  { transform: 'translateY(0px)', opacity: 1 },
  { transform: 'translateY(-20px)', opacity: 0.8 },
  { transform: 'translateY(0px)', opacity: 1 }
], {
  duration: 600,
  easing: 'cubic-bezier(0.34, 1.56, 0.64, 1)', // spring-like
  iterations: 1
});

// Full control — things CSS can't do easily:
animation.pause();
animation.reverse();
animation.playbackRate = 0.5;  // slow motion
animation.currentTime = 300;   // seek to middle

animation.onfinish = () => {
  console.log('Animation complete!');
};

Why this matters: Element.animate() is declarative under the hood — the browser receives the full animation description upfront (just like @keyframes) and can hand it to the compositor thread. But unlike CSS, you get pause, reverse, seek, dynamic playback rate, and finish callbacks — all without touching requestAnimationFrame.

The rule of thumb:

  • Simple hover/toggle effects → CSS transition
  • Looping, multi-step, auto-playing → CSS @keyframes
  • Dynamic, interactive, needs JS control → Element.animate()
  • Physics-based, gesture-driven → requestAnimationFrame (last resort)

4. will-change — and the translateZ(0) hack it replaced

Before will-change, developers used a hack to force GPU layer promotion:

/* The OG GPU hack (circa 2012–2016) */
.old-school {
  transform: translateZ(0);
  /* or */
  transform: translate3d(0, 0, 0);
}

This worked because any 3D transform forces the element onto a separate composite layer. It was the zoom: 1 of GPU rendering — a side effect abused as a feature.

will-change replaced it as the proper, intentional API:

/* ❌ Old: hack, no intent, always-on, wastes memory */
.element { transform: translateZ(0); }

/* ✅ Modern: communicates intent to the browser */
.element { will-change: transform; }

/* ✅✅ Best: apply only when needed, remove when done */
.card:hover { will-change: transform; }
.card:active { transform: scale(0.97); }

But will-change still creates a real composite layer with real memory cost. It's not a magic "go fast" property — it's a resource allocation.

/* ❌ Bad: applied to everything "just in case" */
* {
  will-change: transform; /* congratulations, you've eaten all GPU memory */
}

Every will-change: transform element gets its own layer → its own RGBA bitmap → its own chunk of GPU memory. Use it only when you know an animation is about to happen, and remove it when the animation is done.
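In JS-driven UIs, that add-then-remove lifecycle can be wrapped in a small helper. A hedged sketch (animateCard and the .active class are assumed names; it presumes the element has a CSS transition on transform):

```javascript
// Promote the layer just before the animation, release it when done.
function animateCard(card) {
  card.style.willChange = 'transform';     // allocate the GPU layer now
  requestAnimationFrame(() => {
    card.classList.add('active');          // kicks off the CSS transition
  });
  card.addEventListener('transitionend', () => {
    card.style.willChange = 'auto';        // free the layer's memory
  }, { once: true });
}
```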

5. Shrink your layers

The GPU stores raw pixels. Smaller element = smaller texture = less memory:

/* ❌ 100×100 layer = 40,000 bytes in GPU memory */
.large-layer {
  width: 100px;
  height: 100px;
  will-change: transform;
}

/* ✅ 10×10 layer = 400 bytes — then scale up on GPU */
.smart-layer {
  width: 10px;
  height: 10px;
  background: radial-gradient(circle, #ff6b6b, transparent);
  transform: scale(10);
  will-change: transform;
}

Both look identical. The second uses 100× less GPU memory.

This trick is especially powerful for decorative elements like glows, gradients, and blurred backgrounds. For images, even a 5–10% reduction in source size (compensated with scale) can meaningfully reduce memory on high-DPI screens.

6. Batch DOM reads and writes

Never interleave reads and writes:

// ❌ Forces Reflow on every iteration (layout thrashing)
items.forEach(item => {
  item.style.width = item.offsetWidth + 10 + 'px';
});

// ✅ All reads first, then all writes
const widths = [...items].map(item => item.offsetWidth); // spread handles NodeLists
items.forEach((item, i) => {
  item.style.width = (widths[i] + 10) + 'px';
});

// ✅ Or batch changes via class toggle — one Reflow total
items.forEach(item => item.classList.add('expanded'));
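In larger codebases, the read/write split can be enforced by a tiny scheduler — the pattern libraries like fastdom implement. A minimal sketch with hypothetical measure/mutate/flush names:

```javascript
// Queue reads and writes separately; each frame, run ALL reads against a
// clean layout, then ALL writes, so layout is invalidated at most once.
const readQueue = [];
const writeQueue = [];
let scheduled = false;

// setTimeout fallback lets the sketch also run outside the browser.
const raf = typeof requestAnimationFrame === 'function'
  ? requestAnimationFrame
  : (cb) => setTimeout(cb, 16);

function flush() {
  scheduled = false;
  readQueue.splice(0).forEach((fn) => fn());   // reads first
  writeQueue.splice(0).forEach((fn) => fn());  // then writes
}

function schedule() {
  if (!scheduled) { scheduled = true; raf(flush); }
}

function measure(fn) { readQueue.push(fn); schedule(); }
function mutate(fn)  { writeQueue.push(fn); schedule(); }
```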

Beyond the basics: modern CSS performance tools

The rendering pipeline hasn't changed, but browsers have added new CSS properties that give you explicit control over what gets rendered and when.

contain — opt out of cascading costs

By default, changing one element can force the browser to re-layout and re-paint large portions of the page. The contain property tells the browser: "this element's internals won't affect anything outside it."

.pricing-card {
  contain: layout paint;
  /* 
    layout — element's internals can't affect outside layout
    paint  — element's content won't be painted outside its box

    Now Reflow inside .pricing-card won't cascade 
    to siblings or parents.
  */
}

/* The strict shorthand: */
.widget {
  contain: strict;
  /* = contain: size layout paint style */
  /* Most aggressive — but requires explicit width/height */
}

This is especially powerful for repeated components like cards in a grid, list items, or dashboard widgets. If one card's content changes, only that card gets re-laid out.

content-visibility: auto — skip rendering off-screen content

This is one of the biggest rendering performance wins available today:

.section {
  content-visibility: auto;
  contain-intrinsic-size: auto 500px; /* estimated height for scrollbar */
}

content-visibility: auto tells the browser: "don't bother laying out, painting, or compositing this element until it's near the viewport." For long pages, this can skip rendering of entire sections until the user scrolls to them.

Real-world impact: Chrome's own testing showed rendering cost reductions of up to 7× on long pages — from 232ms down to 30ms rendering time.

How they fit into the pipeline

  Layout ──► Paint ──► Composite

  contain: layout paint     ──► Limits Reflow/Repaint scope (doesn't cascade)
  content-visibility: auto  ──► Skips ALL stages for off-screen elements
  will-change: transform    ──► Pre-promotes to Composite layer
  transform / opacity       ──► Skips Layout and Paint entirely

Why this matters for Core Web Vitals

Everything in this article directly impacts Interaction to Next Paint (INP) — the Core Web Vital that measures how fast your page responds to user input.

Here's the connection: when a user clicks a button, the browser needs to run your JS handler, recalculate layout, repaint, and composite — all before the next frame. If your click handler triggers a massive Reflow (because of layout thrashing, deep DOM, or animating width instead of transform), the time from click to visual response spikes. That's a bad INP score.

Every optimization in this article — batching reads/writes, using contain to limit Reflow scope, keeping animations on the compositor thread — directly reduces INP by keeping the main thread free to process interactions quickly.
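You can watch for the interactions that hurt INP with the Event Timing API. A hedged sketch (watchSlowInteractions is a made-up name, and browser support for `type: 'event'` varies):

```javascript
// Log interactions whose processing took long enough to hurt INP.
// 200ms is the "needs improvement" boundary for INP.
function watchSlowInteractions(thresholdMs = 200) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.duration > thresholdMs) {
        console.warn(`Slow ${entry.name}: ${Math.round(entry.duration)}ms`);
      }
    }
  });
  // durationThreshold filters out fast events at the browser level.
  observer.observe({ type: 'event', buffered: true, durationThreshold: 16 });
  return observer;
}
```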


Debugging: see it with your own eyes

Theory is great. But you need to see what your browser is actually doing.

Remember the checkout page from the intro? Here's exactly how I found the problem:

  1. Opened Web Inspector → Elements → Layers tab.
  2. Saw 34 composite layers where I expected 3–4. Total memory: over 200 MB.
  3. Clicked on a pricing card layer. Compositing reason: "has a composited descendant with a lower z-index." — implicit compositing.
  4. Traced it back: an animated gradient background (@keyframes with transform) was sitting below all the pricing cards in z-index.
  5. Every card above it got silently promoted to its own layer. Each card ~600×400 on 3× display = ~2.7 MB per card. Multiply by 12 cards, plus badges, plus decorative elements — 200 MB.
  6. The fix: moved the animated background to z-index: -1 and removed two stale will-change: transform declarations from cards that no longer animated. 34 layers → 2. 200 MB → ~8 MB.

Three lines of CSS. Two minutes of work. But only because I knew where to look.

Chrome DevTools — Layers panel

Open DevTools first (F12 or Cmd+Option+I on Mac / Ctrl+Shift+I on PC), then Cmd+Shift+P (Mac) or Ctrl+Shift+P (PC) → type "Show Layers" → Enter.

This shows every composite layer on the page: its size, memory consumption, and, most importantly, why it was created. If the compositing reason mentions an overlapping composited element with a lower z-index, that's implicit compositing.

Chrome DevTools — Rendering panel

With DevTools open, Cmd+Shift+P (Mac) or Ctrl+Shift+P (PC) → type "Show Rendering" → Enable:

  • Paint flashing — green rectangles flash wherever Repaint happens. If your "GPU animation" shows green flashes every frame, it's not running on GPU.
  • Layer borders — orange outlines show composite layers. If you see dozens of orange rectangles where you expected a few, you have implicit compositing.
  • Scrolling performance issues — highlights elements that slow down scrolling.

Chrome DevTools — Performance panel

Record a session → look at the flame chart:

  • Purple bars (Layout) in the "Main" section of the flame chart = you're triggering Reflow. Hover over them to see which element and which JS call caused it.
  • Green bars (Paint) on every frame = you're triggering Repaint. Click on a paint event to see the affected area.
  • Nothing but Composite = you're on the GPU fast path ✅
  • If you see "Forced reflow is a likely performance bottleneck" in yellow — that's layout thrashing. Click the warning to jump straight to the offending code.

Safari Web Inspector — Layers tab

Safari has its own Layers view (Develop → Show Web Inspector → Layers). It shows composite layers with memory usage and compositing reasons, and is often more readable than Chrome's panel for quick audits.


The cheat sheet

Cheapest ◄──────────────────────────────────────────► Most expensive

Composite only        Repaint only          Reflow + Repaint
(GPU, compositor      (CPU, main thread)    (CPU, main thread,
 thread)                                     cascades to children)

transform             color                 width / height
opacity               background            margin / padding
                      box-shadow            font-size / line-height
                      visibility            display / position
                      outline               top / left / right / bottom
                                            border / float / overflow

The rules I follow:

  1. Animate only transform and opacity — @keyframes and transition are just delivery mechanisms; the property inside is what matters
  2. Pick the right animation tool: transition for A→B, @keyframes for loops/multi-step, Element.animate() for JS-controlled, rAF as last resort
  3. Keep animated elements high in z-index — implicit compositing is the #1 hidden cost
  4. Never interleave DOM reads and writes — batch them to avoid layout thrashing
  5. Use will-change as a scalpel, not a sledgehammer — apply before animation, remove after
  6. Watch for Layer Squashing — if DevTools shows one giant merged layer, use unique translateZ values
  7. Use contain on repeated components — cards, list items, widgets
  8. Use content-visibility: auto on long scrollable content
  9. Audit with Layers panel after every feature — especially on mobile
  10. Keep DOM flat — deep nesting multiplies Reflow cost
  11. Test on real mid-range devices — your MacBook Pro lies to you about INP, memory, and GPU

The browser rendering pipeline hasn't fundamentally changed in a decade. But the tools to control it have gotten dramatically better. The developers who understand these internals don't just write faster animations — they make better architectural decisions about how components are structured, how DOM is organized, and where performance budgets should go.

Understanding this stuff turned a 2-second freeze on my checkout page into a fix that took 3 lines of CSS. I hope it saves you a similar debugging session.

What rendering performance gotchas have you run into? I'd love to hear your war stories in the comments 👇


If you found this useful, I write about frontend architecture and performance regularly — follow for more deep dives.
