
Shivam Ranjan Pandey


How We Made Our Canvas Application 30x Faster: A Deep Dive into Performance Engineering

Building enterprise-grade software that handles 10,000+ elements without breaking a sweat


The Problem We Faced

Picture this: You're building a complex canvas-based design tool. Your users need to create intricate screens with thousands of elements: shapes, buttons, connectors, widgets, and real-time data displays.

Everything seemed fine during development. Then one day, a tester dragged a group of 50 elements across the screen.

The browser crashed. Out of memory.

That was our wake-up call. We dug into the Chrome DevTools memory profiler and discovered something alarming: every drag operation was creating new objects that never got garbage collected. After just a few minutes of normal use, memory usage had ballooned from 100MB to over 2GB.

But the memory leak was just the tip of the iceberg. As we investigated further, we found performance issues everywhere:

  • Dragging grouped elements caused the UI to freeze
  • Switching between screens felt like loading a new application
  • Undo/redo was duplicating entire state trees
  • Real-time data updates were blocking the main thread

Sound familiar? If you've ever built a complex canvas application with React, you've probably hit these walls too.

Here's how we fixed it.


Understanding the Root Causes

Before diving into solutions, we needed to understand why things were slow. After profiling our application, we identified several performance killers:

1. The "Check Everything" Problem

Every frame, we were checking all 10,000 elements to see which ones were visible. That's O(n) complexity—linear time that grows with every element added.

2. The "Update Everything" Problem

When one element changed, React was re-rendering components that had nothing to do with that change. Our state management was too coarse-grained.

3. The "Remember Everything" Problem

We were keeping all screens loaded in memory, even ones the user hadn't looked at in hours. And our undo history was growing without bounds.

4. The "Clone Everything" Problem

Every state update was deep-cloning entire data structures, even when we only changed one property on one element.

5. The "Search Everything" Problem

Finding which elements belonged to a group, or which data binding belonged to which component, required scanning through entire arrays every single time.


The Solutions: 15 Optimizations That Changed Everything

🌳 Spatial Indexing with R-Trees

The Problem: With 10,000 elements, checking which ones are visible = scanning all 10,000 elements. Every. Single. Frame.

The Solution: We implemented an R-Tree—a spatial data structure that organizes elements by their position on the canvas. Think of it like a library catalog system, but for 2D space.

Instead of asking "Is element #4,532 visible?" 10,000 times, we now ask "What's in this rectangular area?" once. The R-Tree answers in O(log n) time.

The Result: Viewport queries went from ~500ms to ~15ms. That's a 33x improvement.

Before: Check 10,000 elements → 500ms
After:  Query R-Tree → 15ms
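
For a sense of what that query looks like in code, here's a minimal sketch using the open-source rbush library (one of several JavaScript R-Tree implementations); the element and viewport shapes here are illustrative, not our exact types:

import RBush from 'rbush';

// Each indexed entry stores its bounding box plus the element id.
interface IndexedElement {
  minX: number;
  minY: number;
  maxX: number;
  maxY: number;
  id: string;
}

const spatialIndex = new RBush<IndexedElement>();

// Build the index once; bulk-loading is much faster than inserting one by one.
function indexElements(
  elements: { id: string; x: number; y: number; width: number; height: number }[]
) {
  spatialIndex.clear();
  spatialIndex.load(
    elements.map((el) => ({
      minX: el.x,
      minY: el.y,
      maxX: el.x + el.width,
      maxY: el.y + el.height,
      id: el.id,
    }))
  );
}

// One O(log n) query per frame instead of 10,000 visibility checks.
function queryViewport(viewport: { x: number; y: number; width: number; height: number }): string[] {
  return spatialIndex
    .search({
      minX: viewport.x,
      minY: viewport.y,
      maxX: viewport.x + viewport.width,
      maxY: viewport.y + viewport.height,
    })
    .map((entry) => entry.id);
}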

🔍 Flat Indexing for O(1) Lookups

The Problem: Finding all elements in a group meant scanning the entire element array. Finding a data binding in our component tree meant traversing the whole tree.

The Solution: We built flat indexes—simple JavaScript Maps that give us instant lookups:

  • groupId → Set of element IDs
  • elementId → groupId
  • tagId → exact location in tree

The Result: Group operations went from O(n) array scans to O(1) hash lookups. Moving a group of 50 elements is now instant, not sluggish.
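
Here's a minimal sketch of how those indexes can be built (the names and element shape are illustrative, not our exact store types):

interface CanvasElement {
  id: string;
  groupId?: string;
}

// Built once per screen load, updated incrementally on group/ungroup.
const elementsByGroup = new Map<string, Set<string>>(); // groupId -> element ids
const groupByElement = new Map<string, string>();       // elementId -> groupId

function buildGroupIndex(elements: CanvasElement[]) {
  elementsByGroup.clear();
  groupByElement.clear();
  for (const el of elements) {
    if (!el.groupId) continue;
    let members = elementsByGroup.get(el.groupId);
    if (!members) {
      members = new Set<string>();
      elementsByGroup.set(el.groupId, members);
    }
    members.add(el.id);
    groupByElement.set(el.id, el.groupId);
  }
}

// O(1) hash lookup instead of scanning the whole element array.
function getGroupMembers(groupId: string): Set<string> {
  return elementsByGroup.get(groupId) ?? new Set();
}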


💾 Smart Memory Management with Lazy Loading

The Problem: Loading a project with 10 screens × 1,000 elements = 10,000 elements in memory. Most of them sitting there unused.

The Solution: We built a lazy loading architecture with "cold storage" capabilities:

  • Active screen: Full data in memory (Zustand store)
  • Inactive screens: Just metadata (element count, last modified)
  • Cold storage: Full data persisted to browser storage, loaded on-demand

We've built the infrastructure using IndexedDB—a browser-native database API that can store hundreds of MBs (unlike localStorage's 5MB limit). This enables offline persistence and fast screen switching for large projects.

The Result: ~90% memory reduction potential. Screens load in ~50ms when needed.
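
As a rough sketch, here's what the cold-storage path can look like with the idb wrapper around IndexedDB (the database and store names are illustrative):

import { openDB } from 'idb'; // small promise-based wrapper around IndexedDB

const dbPromise = openDB('canvas-cold-storage', 1, {
  upgrade(db) {
    // One object store, keyed by screen id.
    db.createObjectStore('screens');
  },
});

// Evict an inactive screen from the in-memory store into IndexedDB.
async function moveToColdStorage(screenId: string, screenData: unknown): Promise<void> {
  const db = await dbPromise;
  await db.put('screens', screenData, screenId);
}

// Load it back on demand when the user switches to that screen.
async function loadFromColdStorage<T>(screenId: string): Promise<T | undefined> {
  const db = await dbPromise;
  return db.get('screens', screenId);
}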


📦 Transaction Batching

The Problem: This was the root cause of our memory leak. Dragging a group of 50 elements across the screen = 50 state updates per frame = 50 history snapshots per frame. At 60fps, that's 3,000 state copies per second. Each copy was a deep clone of the entire element array. No wonder we ran out of memory.

The Solution: We implemented a two-phase commit pattern:

  1. During drag: Updates happen in local state only (no store mutations)
  2. On drag end: Single batched update to the store = single history entry

The Result:

  • 50 elements × 60fps drag = 1 history entry (vs 3,000 per second)
  • Memory leak: Completely eliminated
  • Update time: ~4ms batched vs ~200ms individual (50x faster)
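
Here's a simplified sketch of the two-phase pattern with a Zustand store; the store shape and action names are illustrative, and the undo-history middleware is left out:

import { create } from 'zustand';

interface ElementMove { id: string; x: number; y: number }

interface CanvasStore {
  positions: Record<string, { x: number; y: number }>;
  // Single batched action: one store mutation, so history middleware
  // records a single entry for the whole drag.
  moveElements: (moves: ElementMove[]) => void;
}

const useCanvasStore = create<CanvasStore>((set) => ({
  positions: {},
  moveElements: (moves) =>
    set((state) => {
      const next = { ...state.positions };
      for (const m of moves) next[m.id] = { x: m.x, y: m.y };
      return { positions: next };
    }),
}));

// Phase 1 - during drag: keep changes in a local draft, never touch the store.
const draft = new Map<string, { x: number; y: number }>();

function onDragMove(id: string, x: number, y: number) {
  draft.set(id, { x, y }); // no store mutation, no history snapshot
}

// Phase 2 - on drag end: commit everything in one update.
function onDragEnd() {
  const moves = Array.from(draft, ([id, pos]) => ({ id, ...pos }));
  useCanvasStore.getState().moveElements(moves);
  draft.clear();
}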

🎯 Selective Zustand Subscriptions

The Problem: A component subscribing to the store re-renders on ANY state change, even unrelated ones.

The Solution: Granular selectors that subscribe to exactly what each component needs:

// ❌ Bad: Re-renders when ANYTHING changes
const store = useStore();

// ✅ Good: Re-renders only when activeTool changes
const activeTool = useStore((state) => state.activeTool);

// ✅ Better: Pre-built optimized hooks
const activeTool = useActiveTool();
const { canUndo, canRedo } = useHistoryState();
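
Those pre-built hooks are just thin wrappers around granular selectors. A rough sketch building on the useStore above, assuming Zustand v4.4+ for useShallow and an illustrative past/future history shape:

import { useShallow } from 'zustand/react/shallow';

// Single-value selector: re-renders only when activeTool changes.
export const useActiveTool = () => useStore((state) => state.activeTool);

// Object selector: shallow comparison avoids re-renders when the
// derived values haven't actually changed.
export const useHistoryState = () =>
  useStore(
    useShallow((state) => ({
      canUndo: state.past.length > 0,
      canRedo: state.future.length > 0,
    }))
  );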

The Result: Components that used to re-render 100x per second now re-render only when their specific data changes.


🖼️ Viewport Culling

The Problem: Rendering 10,000 elements when only 100 are visible wastes GPU cycles.

The Solution: Using our spatial index, we now render only what's visible (plus a small buffer for smooth scrolling).

The Result:

  • Culling ratio: ~90% for large canvases
  • Query time: <1ms
  • GPU load: Dramatically reduced
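
Concretely, culling is just the spatial query from the R-Tree section plus a margin, feeding the render list. A sketch reusing the queryViewport helper from earlier (the buffer size is illustrative):

const CULL_BUFFER = 200; // extra off-screen pixels for smooth scrolling

function getVisibleElementIds(
  viewport: { x: number; y: number; width: number; height: number },
  zoom: number
): string[] {
  // Expand the query rectangle by the buffer, in canvas coordinates.
  const margin = CULL_BUFFER / zoom;
  return queryViewport({
    x: viewport.x - margin,
    y: viewport.y - margin,
    width: viewport.width + 2 * margin,
    height: viewport.height + 2 * margin,
  });
}

// In the canvas component, only the visible subset gets rendered:
// const visibleIds = getVisibleElementIds(viewport, zoom);
// return visibleIds.map((id) => <ElementRenderer key={id} id={id} />);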

⏱️ History Size Limits

The Problem: Unlimited undo history = memory growing forever. After a long editing session, the browser would crash.

The Solution: Circular buffer pattern—we keep the last 50 history states and discard older ones.

The Result: Memory usage is now bounded. No more crashes after long sessions.
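
In code, the same effect can come from simply capping the undo stack when a new entry is pushed; a true ring buffer avoids the array copy, but the behavior is identical. A sketch with an illustrative history shape:

const MAX_HISTORY = 50;

interface History<T> {
  past: T[]; // oldest ... newest
  present: T;
  future: T[];
}

function pushHistory<T>(history: History<T>, next: T): History<T> {
  const past = [...history.past, history.present];
  // Drop the oldest entries once the cap is exceeded, keeping memory bounded.
  if (past.length > MAX_HISTORY) {
    past.splice(0, past.length - MAX_HISTORY);
  }
  return { past, present: next, future: [] };
}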


🔄 Structural Sharing

The Problem: Every state update was doing JSON.parse(JSON.stringify(tree)), deep-cloning the entire project tree even when only one property on one element had changed.

The Solution: Immutable updates with structural sharing. We only clone the nodes on the path to what changed:

// Before: Clone EVERYTHING
const newTree = JSON.parse(JSON.stringify(tree)); // O(n)

// After: Clone only the path
// If we're updating node at depth 3, we clone 3 nodes, not 10,000
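
Here's a sketch of what that path copying can look like for a simple node tree, with the path supplied by the flat tagId index described earlier (the node shape is illustrative; libraries like Immer give you the same behavior via proxies):

interface TreeNode {
  id: string;
  props: Record<string, unknown>;
  children: TreeNode[];
}

// `path` is the list of child indexes from the root down to the target node.
// Only the nodes along that path are cloned; every untouched subtree is
// reused by reference (structural sharing).
function updateAtPath(
  node: TreeNode,
  path: number[],
  patch: Record<string, unknown>
): TreeNode {
  if (path.length === 0) {
    return { ...node, props: { ...node.props, ...patch } };
  }
  const [index, ...rest] = path;
  const children = node.children.slice();
  children[index] = updateAtPath(node.children[index], rest, patch);
  return { ...node, children };
}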

The Result:

  • Clone time: O(depth) vs O(n)
  • Memory: ~95% reduction for deep trees

🏷️ Real-Time Data Updates

The Problem: Our application receives hundreds of data value updates per second from external sources. Each update was traversing the entire component tree to find the binding.

The Solution:

  1. Flat data index: Build once when project loads, O(1) lookups forever
  2. Batched updates: Collect updates for 100ms, apply all at once
  3. Separate cache: Data values stored separately from tree structure

The Result: Data updates went from blocking the UI to being imperceptible.
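
A minimal sketch of that batching window; the flush interval and the applyBatch callback (which would write the whole batch into the separate value cache in one store update) are illustrative:

type ApplyBatch = (values: Map<string, unknown>) => void;

function createDataBatcher(applyBatch: ApplyBatch, intervalMs = 100) {
  const pending = new Map<string, unknown>();
  let timer: ReturnType<typeof setTimeout> | null = null;

  return function onDataUpdate(tagId: string, value: unknown) {
    // Later updates for the same tag overwrite earlier ones within the window,
    // so a rapidly changing tag still costs one write per flush.
    pending.set(tagId, value);
    if (timer !== null) return;
    timer = setTimeout(() => {
      timer = null;
      // One batched commit instead of hundreds of individual store writes.
      applyBatch(new Map(pending));
      pending.clear();
    }, intervalMs);
  };
}

// Usage (illustrative store action name):
// const onDataUpdate = createDataBatcher((values) =>
//   useCanvasStore.getState().applyDataValues(values)
// );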


The Numbers Don't Lie

Operation                  | Before   | After  | Improvement
Selection (10k elements)   | ~500ms   | ~15ms  | 33x faster
Screen switch              | ~2000ms  | ~50ms  | 40x faster
Group drag (50 elements)   | ~100ms   | ~4ms   | 25x faster
Memory (10 screens)        | ~400MB   | ~40MB  | 90% reduction
Data update latency        | ~50ms    | <1ms   | 50x faster

Key Takeaways

1. Measure First, Optimize Second

We didn't guess where the problems were. We profiled, identified bottlenecks, and targeted our efforts.

2. Data Structures Matter

The right data structure (R-Tree, flat indexes, Maps) can turn O(n) operations into O(1) or O(log n). This is Computer Science 101, but it's easy to forget when you're deep in React components.

3. Batch Everything

Whether it's state updates, history entries, or network requests—batching almost always wins.

4. Be Lazy (In a Good Way)

Don't load data until you need it. Don't render elements until they're visible. Don't clone objects until they change.

5. Granularity is Your Friend

Coarse-grained state management is easy to write but hard to scale. Fine-grained subscriptions take more thought upfront but pay dividends at scale.


Wrapping Up

Building performant canvas applications at scale is challenging, but it's not magic. It's about understanding where time and memory are being spent, and applying well-known computer science principles to solve those specific problems.

The techniques we've shared here are universal. If you're building any complex canvas application—a design tool, a diagramming app, a game level editor, or a dashboard builder—these same patterns will help you scale.

The best performance optimization is the one that makes your users forget they're using software at all.


#React #Canvas #Engineering #Optimization #JavaScript #TypeScript
