⚠️ Heads up before we start: Everything in this post is very experimental. HTML-in-Canvas is a proposal sitting behind a Chrome Canary flag (chrome://flags/#canvas-draw-element).
My background is mostly in frontend work, and I've spent more time than I'd like to admit wrestling with <canvas>, specifically trying to build rich interactive experiences that look great, behave accessibly, and don't feel like I'm fighting the browser the entire time.
If you've built anything intense with canvas (interactive data dashboards, creative tools, game UIs, WebGL-heavy experiences), you probably know the feeling. You get beautiful, performant rendering, but you trade away a lot: accessibility, interactive text, real form elements. The moment you step inside a <canvas>, the browser's layout engine waves goodbye and you're on your own.
So when a colleague shared the HTML-in-Canvas proposal from WICG with me, my first reaction was: wait, is this real?
The Problem Canvas Has Always Had
Canvas is great at pixel-perfect rendering: WebGL shaders, games, creative tools. But it has a fundamental tension with the rest of the web platform.
When you render text in canvas, you lose:
- Screen reader support
- Native text selection
- Proper i18n and font rendering (bidirectional text, ligatures...)
- The browser's built-in hit testing
Chart libraries have been working around this for years. They either maintain a hidden DOM that mirrors the canvas (complex, fragile) or they just give up on accessibility. Neither option is great.
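To make that workaround concrete, here's a rough sketch of the hidden-mirror pattern (the `mirrorLabel` helper is made up for illustration): a visually hidden element shadows each piece of canvas-drawn text, purely so assistive technology has something to announce.

```javascript
// Hypothetical sketch of the "hidden mirror DOM" workaround chart
// libraries use today. A visually hidden element shadows a piece of
// canvas-drawn text so screen readers have something to read.
function mirrorLabel(container, text, x, y) {
  const el = document.createElement('span');
  el.textContent = text;
  // Position it where the canvas drew the text, but keep it invisible.
  Object.assign(el.style, {
    position: 'absolute',
    left: `${x}px`,
    top: `${y}px`,
    width: '1px',
    height: '1px',
    overflow: 'hidden',
    clipPath: 'inset(50%)',
  });
  container.appendChild(el);
  return el;
}
```

The fragility is obvious: every pan, zoom, or re-render on the canvas side means re-syncing all of these by hand.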
The same problem shows up in creative tools, in-game UIs, data visualization dashboards, video export features — any time you need the power of canvas but also want your content to behave like HTML.
What HTML-in-Canvas Proposes
The idea is surprisingly straightforward: let HTML elements live inside a <canvas> and get rendered into it, while still participating in the browser's layout, hit testing, and accessibility tree.
Three new primitives make this work:
1. The layoutsubtree attribute
You add this to a <canvas> element. It tells the browser: "treat my children as real layout participants." They go through normal CSS layout, they're in the accessibility tree, they can receive focus — but they're not painted directly to the screen. Their rendering is invisible until you explicitly draw them.
<canvas id="canvas" layoutsubtree>
  <form id="my-form">
    <label for="name">Name:</label>
    <input id="name" type="text">
  </form>
</canvas>
2. drawElementImage()
This is the key method. It takes a child element and draws it into the canvas context at the coordinates you specify. The browser handles the rendering (fonts, borders, shadows, everything) and gives you back a CSS transform to keep the element's DOM position in sync with where it's actually drawn.
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
const my_form = document.getElementById('my-form');

canvas.onpaint = () => {
  ctx.reset();
  const transform = ctx.drawElementImage(my_form, 100, 50);
  my_form.style.transform = transform.toString();
};
That transform return value is doing something important: it keeps the accessibility tree and hit testing aligned with what the user actually sees on screen.
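To build intuition for what that transform encodes, here's a toy model (`syncTransform` is a made-up helper, not part of the proposal, and the real API returns a transform object rather than a string): if layout puts the element at one position but you drew it somewhere else, the element's DOM box has to be shifted by the difference so hit testing and accessibility line up with the pixels.

```javascript
// Toy model of the transform bookkeeping (names hypothetical).
// If layout places the element at (layoutX, layoutY) but the canvas
// drew it at (drawX, drawY), the DOM box must be translated by the
// difference so focus rings, taps, and screen readers match the pixels.
function syncTransform(drawX, drawY, layoutX, layoutY) {
  const dx = drawX - layoutX;
  const dy = drawY - layoutY;
  return `translate(${dx}px, ${dy}px)`;
}

// An element laid out at the canvas origin, drawn at (100, 50):
syncTransform(100, 50, 0, 0); // 'translate(100px, 50px)'
```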
3. The paint event
Instead of polling with requestAnimationFrame, you get a paint event that fires when any canvas child's rendering changes. You can also call requestPaint() to force it — similar to how rAF works.
canvas.onpaint = (event) => {
  // event.changedElements tells you exactly what changed
  event.changedElements.forEach(el => {
    ctx.drawElementImage(el, getX(el), getY(el));
  });
};
And for OffscreenCanvas in workers, there's captureElementImage() — which lets you snapshot an element and transfer it to a worker thread for rendering. That's a big deal for performance-heavy canvas work.
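Sketching how that hand-off might look (heavily hypothetical: only the captureElementImage name comes from the proposal; the receiver, signatures, and message shape here are all my assumptions):

```javascript
// Hypothetical sketch of snapshotting an element and rendering it in a
// worker. Only the captureElementImage name comes from the proposal;
// the receiver, signatures, and message shape are assumed.

// Main thread: snapshot a canvas child, then transfer the snapshot
// (rather than copying it) to a worker that owns an OffscreenCanvas.
async function sendSnapshotToWorker(canvas, element, worker) {
  const image = await canvas.captureElementImage(element);
  worker.postMessage({ image }, [image]); // transfer, don't copy
}

// Worker side: draw the received snapshot into the OffscreenCanvas
// context like any other image source.
function handleSnapshot(offscreenCtx, event) {
  offscreenCtx.drawImage(event.data.image, 100, 50);
}
```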
Why This Matters for Rich Interactive Experiences
Think about the kind of experiences that live in canvas today: game UIs, creative tools, interactive data dashboards, 3D scenes with 2D overlays, immersive web experiences with custom controls. They all share the same pain: the moment you need real UI (a tooltip, a form, a menu, styled labels), you either fake it with canvas drawing primitives or you layer DOM elements on top and pray the positioning stays in sync.
With HTML-in-Canvas, you could just... write HTML for that content. Real HTML, with real CSS. Drop it inside your canvas, call drawElementImage() in the paint event, and let the layout engine do the hard work.
Imagine building a WebGL scene where the HUD is actual HTML: keyboard-navigable, screen-reader accessible, styled with CSS. Or an interactive dashboard where the chart controls are real <input> elements rendered directly into the canvas surface, with proper focus management and no z-index hacks.
Multi-line text? CSS handles it. RTL content? The browser handles it. Accessibility for your in-canvas UI? It's already in the DOM for free.
Chart libraries also benefit here: axes, legends, and tooltips are all text-heavy and layout-sensitive. Today they reimplement font measurement and text wrapping from scratch. With this API, they could delegate that entirely to the browser. But honestly, the more exciting territory is the interactive, immersive stuff: the experiences that currently feel like you're building against the platform instead of with it.
An Idea: What If Someone Built a Wrapper?
Here's where I want to be clear: this is speculation, not an announcement. But I've been thinking about it, and OGL is a fantastic example of what a well-designed thin wrapper looks like. It doesn't try to be Three.js; it just makes WebGL less painful without hiding what's happening underneath. You still think in WebGL concepts; it just removes the boilerplate.
Something similar for HTML-in-Canvas could be interesting. Imagine a small library (maybe 5-10KB) that handles:
- The layoutsubtree setup and resize observer wiring
- A declarative way to bind elements to canvas positions
- The transform synchronization so hit testing always works
- A simple reactive paint loop
Usage might look something like this (pure speculation):
import { CanvasScene } from 'html-canvas'; // hypothetical

const scene = new CanvasScene('#my-canvas');
scene.add('#chart-legend', { x: 20, y: 20 });
scene.add('#chart-tooltip', { x: 'dynamic', y: 'dynamic' });

scene.onpaint(({ draw }) => {
  draw('#chart-legend');
  draw('#chart-tooltip', getTooltipPosition());
});
The API surface would stay small. No magic, no virtual DOM, no framework opinion. Just a thin layer that handles the sync complexity so you can focus on what you're building.
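If someone did build this, the internals might start as little more than a registry plus a paint hook. Here's an entirely hypothetical skeleton (CanvasScene and every method name are made up to match the speculative usage snippet):

```javascript
// Entirely hypothetical skeleton of a thin CanvasScene wrapper.
// The real sync logic (drawElementImage calls, transform updates,
// resize observers) is omitted; this only shows the registry shape.
class CanvasScene {
  constructor(canvas) {
    // Accept a selector string or a canvas element directly.
    this.canvas =
      typeof canvas === 'string' ? document.querySelector(canvas) : canvas;
    this.items = new Map(); // selector -> { x, y }
    this.paintCallback = null;
  }

  // Register a canvas child (by selector) at a default position.
  add(selector, position) {
    this.items.set(selector, { ...position });
    return this;
  }

  // Store the user's paint callback; a real implementation would wire
  // this to the canvas paint event and hand it a draw() helper.
  onpaint(callback) {
    this.paintCallback = callback;
  }

  // Look up the last known position for a registered item.
  positionOf(selector) {
    return this.items.get(selector);
  }
}
```

The interesting work would all live in the paint wiring, not in this registry; the point is how little surface area the public API needs.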
It could be a genuinely useful side project once the API stabilizes. For now it's just an interesting thought experiment — but the kind that's worth keeping in the back of your mind.
Should You Try It?
If you're curious: yes, absolutely. Enable the flag in Chrome Canary, clone the WICG repo, and play with the demos. There's a pie chart example, a WebGL cube with HTML on its surface, and an interactive form drawn into canvas. They're genuinely cool.
But calibrate your expectations:
- This is a proposal, currently in a dev trial in Chrome Canary only
- The API will change — possibly significantly
- No other browser has signaled implementation yet
- There are real constraints (no cross-origin content, no system colors, etc.)
The right move today is to watch it, experiment in throwaway projects, and maybe file some issues if you hit interesting edge cases. I'm pretty sure the team is actively looking for feedback.
Have you hit the canvas + accessibility wall before? I'd love to hear how you've worked around it; drop it in the comments.