Sara Loera
HTML-in-Canvas Feels Fake Until You Try to Build With It

HTML-in-Canvas feels fake the first time it works.

Real DOM.

Real CSS.

Real layout.

Inside canvas.

You write a tiny canvas.onpaint, call drawElementImage(), and the browser just… does it. Your styled HTML is suddenly part of a canvas frame.

Beautiful.

Then you try to build an actual app.

Now you have multiple surfaces, resize logic, export timing, React unmounts, cleanup, invalidation, and the classic:

wait, is this number CSS pixels or canvas pixels?

That's the part where the cool browser primitive turns into lifecycle work.

So I built Prism.


First: what is HTML-in-Canvas?

The WICG HTML-in-Canvas proposal lets a canvas render real HTML elements directly.

Not screenshots.

Not html2canvas.

Not SVG foreignObject.

Actual DOM, painted into a canvas frame by the browser.

A tiny example looks like this:

<canvas id="canvas" layoutsubtree>
  <div id="panel" style="width: 400px; height: 200px">
    <h2>Real HTML</h2>
    <p>Real CSS. Real fonts. Real layout.</p>
  </div>
</canvas>
const canvas = document.getElementById("canvas");
const panel = document.getElementById("panel");
const ctx = canvas.getContext("2d");
if (!ctx) throw new Error("2d context unavailable");

canvas.onpaint = () => {
  ctx.drawElementImage(panel, 0, 0);
};

canvas.requestPaint();

The rough shape is:

  • layoutsubtree opts canvas children into layout.
  • drawElementImage() draws a child element into the canvas.
  • onpaint fires when the browser says the canvas subtree needs painting.

For one surface, it is almost suspiciously nice.

But apps are never one surface.


The part that gets messy

With the raw API, your app has to answer a bunch of boring-but-important questions:

  • Which DOM elements are canvas surfaces?
  • Who updates their bounds?
  • Who asks the browser for a new paint?
  • What happens when a component unmounts?
  • What happens when the runtime is destroyed?
  • How do you wait for a paint-ready frame before export?
  • How do you keep CSS pixels and backing-store pixels straight?

A single React component can already start to look like this:

useEffect(() => {
  const canvas = canvasRef.current;
  const scene = sceneRef.current; // the DOM element to draw
  const ctx = canvas?.getContext("2d");
  if (!canvas || !ctx || !scene) return;

  canvas.onpaint = () => {
    ctx.reset();
    ctx.drawElementImage(scene, 0, 0);
  };

  // Simplified: uses CSS pixels for backing-store size.
  // In production, use DPR-aware sizing via devicePixelContentBoxSize
  // or multiply contentRect dimensions by devicePixelRatio.
  const resizeObserver = new ResizeObserver(([entry]) => {
    canvas.width = Math.round(entry.contentRect.width);
    canvas.height = Math.round(entry.contentRect.height);
    canvas.requestPaint();
  });

  resizeObserver.observe(canvas);
  canvas.requestPaint();

  return () => {
    resizeObserver.disconnect();
    canvas.onpaint = null;
  };
}, []);
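The comment in that snippet flags the simplification. The DPR-aware sizing it describes can be sketched as a pure helper — `SizeEntry` and `backingStoreSize` are hypothetical names for illustration, not part of Prism or the proposal:

```typescript
// Hypothetical helper: compute a canvas backing-store size from a
// ResizeObserver entry. Prefer devicePixelContentBoxSize (exact device
// pixels, immune to rounding drift at fractional DPRs) and fall back to
// contentRect dimensions multiplied by devicePixelRatio.
type SizeEntry = {
  contentRect: { width: number; height: number };
  devicePixelContentBoxSize?: { inlineSize: number; blockSize: number }[];
};

function backingStoreSize(entry: SizeEntry, dpr: number) {
  const box = entry.devicePixelContentBoxSize?.[0];
  if (box) {
    return { width: box.inlineSize, height: box.blockSize };
  }
  return {
    width: Math.round(entry.contentRect.width * dpr),
    height: Math.round(entry.contentRect.height * dpr),
  };
}
```

In the effect above you would assign the result to canvas.width and canvas.height before calling requestPaint().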

That is not terrible.

It is also not the app I wanted to write.

Now add export. Add multiple DOM surfaces. Add transform sync. Add pointer interaction. Add route changes. Add a framework. Add "why is this blank PNG happening only sometimes?"

That is where Prism comes in.


What Prism is

Prism is a native-first HTML-in-Canvas runtime for managed DOM surfaces in canvas applications.

It does not replace your renderer.

Your app still owns the scene, drawing model, animation loop, state, data, interactions, and visual decisions.

Prism owns the DOM-surface lifecycle:

your app owns:
  scene, rendering, animation, state, interaction

Prism owns:
  surface registration, bounds, invalidation,
  paint readiness, coordinate helpers, cleanup

Install it:

pnpm add @synthesisengineering/prism

Use it like this:

import { CanvasRuntime } from "@synthesisengineering/prism";

const runtime = new CanvasRuntime(canvas, { backend: "auto" });

const surface = runtime.registerSurface(element, {
  bounds: { x: 0, y: 0, width: 320, height: 180 }
});

runtime.onPaint(({ drawSurface }) => {
  drawSurface(surface);
});

runtime.start();

That is the core idea.

The DOM stays DOM.

The canvas stays canvas.

Prism manages the boundary.


The shift: DOM as source material

The important part is not that Prism makes prettier pixels than hand-written canvas.

You can make beautiful things with Canvas 2D, SVG, WebGL, shaders, and all the usual tricks.

The important part is that the source material can stay DOM.

Your labels can be real HTML.

Your typography can stay CSS.

Your icons can stay SVG.

Your React components can stay React components.

Prism lets the canvas treat those DOM-authored pieces as managed surfaces.

So instead of rewriting every styled element as canvas drawing code, you can author the visual source with the browser's layout engine and compose it inside canvas.

The browser gives us the primitive.

Prism provides an app lifecycle.


Use case 1: data visualization

One example is Prism Atlantic.


It uses real NOAA/NHC HURDAT2 Atlantic storm-track data from 2000–2025. The canvas draws the storm paths. Prism manages the HTML/CSS surfaces: title, overview, legend, tooltip, detail panel, caption, and export button.

Open Prism Atlantic →

The app owns the data visualization. Prism owns the surface lifecycle.

import { CanvasRuntime } from "@synthesisengineering/prism";

const runtime = new CanvasRuntime(canvas, { backend: "auto" });

const tooltip = runtime.registerSurface(tooltipEl, {
  bounds: { x: 0, y: 0, width: 280, height: 120 }
});

const legend = runtime.registerSurface(legendEl, {
  bounds: { x: 20, y: 20, width: 200, height: 400 }
});

runtime.onPaint(({ ctx, drawSurface }) => {
  ctx.save();
  ctx.scale(runtime.pixelRatio, runtime.pixelRatio);
  drawStormTracks(ctx);
  ctx.restore();
  drawSurface(tooltip);
  drawSurface(legend);
});

runtime.start();

The export path is the part I really wanted to get right.

With Prism, you wait for fonts, then wait for one Prism-owned paint pass, then use the normal canvas API:

await document.fonts.ready;
await runtime.paintOnce();

const blob = await new Promise<Blob>((resolve, reject) => {
  canvas.toBlob((value) => {
    if (!value) {
      reject(new Error("Canvas export failed."));
      return;
    }
    resolve(value);
  }, "image/png");
});

paintOnce() does not export anything by itself.

It just answers: has Prism completed a paint-ready frame?

Then canvas.toBlob() does the export.

No screenshot library. No html2canvas. No foreignObject export path.


Use case 2: React components as canvas surfaces

Another example is React Composer Lite.


It shows React-authored HTML/CSS components as movable, transformable, exportable canvas surfaces.

The trick is not to let React and Prism fight over ownership.

React owns component state.

Prism owns surface registration and cleanup.

The pattern is:

  1. Create the runtime once.
  2. Register the DOM node once for a runtime/element pair.
  3. Update bounds through surface.setBounds().
  4. Destroy and dispose on unmount.
import { useEffect, useRef, useState } from "react";
import type { RefObject } from "react";
import { CanvasRuntime } from "@synthesisengineering/prism";
import type { CanvasSurface } from "@synthesisengineering/prism";

type SurfaceBounds = {
  x: number;
  y: number;
  width: number;
  height: number;
};

export function usePrismRuntime(
  canvas: HTMLCanvasElement | null
): CanvasRuntime | null {
  const [runtime, setRuntime] = useState<CanvasRuntime | null>(null);

  useEffect(() => {
    if (!canvas) {
      setRuntime(null);
      return;
    }

    const nextRuntime = new CanvasRuntime(canvas, { backend: "auto" });
    nextRuntime.start();
    setRuntime(nextRuntime);

    return () => {
      setRuntime(null);
      nextRuntime.destroy();
    };
  }, [canvas]);

  return runtime;
}

export function usePrismSurface(
  runtime: CanvasRuntime | null,
  elementRef: RefObject<HTMLElement | null>,
  bounds: SurfaceBounds
) {
  const surfaceRef = useRef<CanvasSurface | null>(null);

  useEffect(() => {
    const element = elementRef.current;
    if (!runtime || !element) return;

    const surface = runtime.registerSurface(element, { bounds });
    surfaceRef.current = surface;

    return () => {
      surface.dispose();
      surfaceRef.current = null;
    };
    // Register once for this runtime/element pair.
    // Bounds updates are handled by the effect below.
  }, [runtime, elementRef]);

  useEffect(() => {
    surfaceRef.current?.setBounds(bounds);
  }, [bounds.x, bounds.y, bounds.width, bounds.height]);

  return surfaceRef;
}

The important detail: usePrismRuntime returns state, not a ref, so downstream usePrismSurface re-runs correctly when the runtime is ready. Bounds updates do not require re-registering the surface — register once, then update through surface.setBounds().
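One detail worth spelling out: the bounds effect depends on the four scalar fields rather than the bounds object itself, because callers usually pass a fresh object literal on every render, and object identity would re-fire the effect every time. A hypothetical shallow-equality helper (not part of Prism's API) makes the reasoning concrete:

```typescript
// Why the effect lists bounds.x/y/width/height instead of `bounds`:
// two structurally identical bounds objects are still different by
// identity, so an identity-based dependency would churn every render.
// Comparing fields, as this helper does, captures the intended meaning.
type Bounds = { x: number; y: number; width: number; height: number };

function boundsEqual(a: Bounds, b: Bounds): boolean {
  return (
    a.x === b.x && a.y === b.y && a.width === b.width && a.height === b.height
  );
}
```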

The boundary is clean:

  • React renders normal components.
  • Prism registers those DOM nodes as surfaces on mount.
  • Canvas composes them into a frame.
  • Cleanup happens when components unmount.

Your React components do not need to become canvas drawing code.

They can stay React components.


Use case 3: DOM as creative material

The third example is Prism Atelier.


This one is less practical and more fun.

It uses DOM-authored HTML/CSS/SVG as visual material. A real DOM surface is registered once, then drawn repeatedly inside the canvas paint pass with transforms, opacity, shadows, blend modes, and pointer-driven motion.

Open Prism Atelier →

The source can be a normal element:

<div id="type-surface" class="type-surface">PRISM</div>

Then Prism turns it into a reusable canvas surface:

const surface = runtime.registerSurface(typeEl, {
  bounds: { x: -380, y: -105, width: 760, height: 210 }
});

And the app can compose it:

runtime.onPaint(({ ctx, drawSurface }) => {
  // runtime.canvas.width/height are backing-store pixels (CSS pixels × devicePixelRatio).
  // Surface bounds passed to registerSurface/setBounds are CSS pixels — keep them separate.
  ctx.fillStyle = "#07070a";
  ctx.fillRect(0, 0, runtime.canvas.width, runtime.canvas.height);

  const count = 35;
  const radius = 220 * runtime.pixelRatio;
  const cx = runtime.canvas.width / 2;
  const cy = runtime.canvas.height / 2;

  for (let i = 0; i < count; i += 1) {
    const t = i / count;
    const angle = t * Math.PI * 2 + rotation; // rotation is updated elsewhere (e.g. pointer-driven motion)

    ctx.save();
    ctx.translate(
      cx + Math.cos(angle) * radius,
      cy + Math.sin(angle) * radius
    );
    ctx.rotate(angle);
    ctx.globalAlpha = 1 - t * 0.65;
    drawSurface(surface);
    ctx.restore();
  }
});

You could hand-roll something like this with the raw API.

But then you own the browser paint hooks, bounds, invalidation, readiness, and cleanup yourself.

Prism makes the DOM surface reusable as canvas material without making your app coordinate raw onpaint, requestPaint(), and drawElementImage() directly.


The coordinate-space footgun

One thing Prism makes explicit is coordinate space.

Surface bounds are CSS pixels:

surface.setBounds({
  x: 24,
  y: 32,
  width: 360,
  height: 220
});

Direct canvas drawing uses backing-store pixels.

So Prism exposes helpers:

const point = runtime.cssPointToCanvasPixels({ x: 24, y: 32 });
const size = runtime.cssLengthToCanvasPixels(12);

This sounds boring until it saves you from the classic "everything is offset and blurry on my display" bug.

Boring runtime helpers are good, actually.
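Based on the pixel-ratio math used throughout the examples above (backing-store pixels = CSS pixels × devicePixelRatio), these helpers presumably reduce to a straight multiply. A standalone sketch with hypothetical function names — not Prism's actual implementation:

```typescript
// Assumed mapping: CSS pixels × pixel ratio, applied per axis.
type Point = { x: number; y: number };

function cssPointToCanvasPixels(p: Point, pixelRatio: number): Point {
  return { x: p.x * pixelRatio, y: p.y * pixelRatio };
}

function cssLengthToCanvasPixels(len: number, pixelRatio: number): number {
  return len * pixelRatio;
}
```

At a pixel ratio of 2, the CSS point (24, 32) lands at backing-store (48, 64) — which is exactly the offset-and-blurry bug when you forget the conversion.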


An agent skill is included

One thing I care about: Prism should be hard to misuse, even when an AI coding agent is writing the first draft.

So Prism ships with an agent skill:

npx skills add synthesiseng/prism --skill prism-runtime

The skill teaches agents the Prism runtime contract:

  • import from @synthesisengineering/prism
  • register DOM nodes as surfaces
  • draw surfaces inside onPaint()
  • wait for document.fonts.ready and runtime.paintOnce() before export
  • avoid html2canvas, dom-to-image, raw drawElementImage(), and deep imports

That last part matters.

Without guidance, agents tend to reach for screenshot libraries or raw platform APIs. The skill keeps them on the Prism path.


The renderer boundary

Prism can sit alongside renderers like Three.js because it does not try to own the scene. Today, the documented API remains 2D-first; renderer-specific integrations are future-facing.

That boundary is the design:

  • the renderer owns the scene
  • the app owns state and interaction
  • Prism owns DOM-surface lifecycle

The point is not to turn Prism into a renderer.

The point is to let renderers use real DOM surfaces without every app rebuilding the same lifecycle layer.


The honest caveats

Prism is still early.

Native fidelity currently requires Chromium with:

chrome://flags/#canvas-draw-element

Prism detects native support and can fall back to a compatibility path, but fallback is lower fidelity. It is not equivalent to native HTML rendering.
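The proposal adds drawElementImage() to the 2D context, so a plausible detection strategy — a guess at what backend: "auto" might check, not Prism's actual code — is simply probing for that method:

```typescript
// Hypothetical feature check: if the context exposes drawElementImage
// as a function, native HTML-in-Canvas support is available; otherwise
// a lower-fidelity compatibility path is needed.
function hasNativeDrawElement(ctx: unknown): boolean {
  const candidate = ctx as { drawElementImage?: unknown } | null;
  return typeof candidate?.drawElementImage === "function";
}
```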

This is alpha software: 0.1.0-alpha.8.

The current public API is 2D-first. WebGL, WebGPU, Three.js, Pixi, and Phaser integrations are future-facing, not yet part of the public API.

Prism is not a renderer, UI kit, design tool, app framework, charting library, or game engine.

It is a runtime for managed DOM surfaces in canvas applications.

That is the whole point.


Try it

Docs and examples: runprism.dev

Source: github.com/synthesiseng/prism

Live examples: Prism Atlantic, React Composer Lite, and Prism Atelier.

Install:

pnpm add @synthesisengineering/prism

Agent skill:

npx skills add synthesiseng/prism --skill prism-runtime

Feedback welcome from people building canvas-heavy apps, visual tools, data viz, editors, and creative systems.
