Below is a 3D fractal called "Mandelbulb" being rendered in real-time by the GPU. You can drag to rotate it.
Try right-clicking and saving the image at an angle you like. It just works.
When you think about it, this is pretty remarkable. Every frame, the shader runs calculations for hundreds of thousands, sometimes millions, of pixels on the GPU, and a single right-click turns the result into a PNG file. Why can GPU-rendered content be saved as a regular PNG?
I got curious about this, so I did some research on browser graphics technology and its evolution — from the birth of the Canvas element to WebGL and JavaScript image export APIs. There may be some inaccuracies, so please take this as a learning reference.
Before Canvas: The Plugin Era
Before Canvas, displaying rich graphics in browsers required plugins.
Java applets, introduced in 1995, were widely used for interactive content in browsers. The early beta version of Minecraft apparently ran as a Java applet. Adobe Flash, introduced in 1996, dominated games, animations, and video playback.
However, both required plugin installation and had numerous security issues. Canvas would eventually become the plugin-free replacement for these technologies.
The Birth of the Canvas Element (2004)
The Canvas element was first committed to WebKit on May 25, 2004, by Apple engineer Richard Williamson. It was reportedly developed to power Dashboard widgets in Mac OS X 10.4 Tiger.
The reaction from the web standards community was mixed. Eric Meyer reportedly responded with "What the bleeding hell?!?" expressing concerns about proprietary extensions.
Later, Ian Hickson of WHATWG organized the specification draft based on Apple's implementation. Firefox 1.5 added support in 2005, Opera in 2006, and standardization gradually progressed.
| Year | Event |
|---|---|
| May 2004 | Apple implements Canvas in WebKit |
| Aug 2004 | First WHATWG spec draft |
| Nov 2005 | Firefox 1.5 support |
| 2006 | Opera support |
| Mar 2007 | Apple claims Canvas-related patents |
| Jun 2008 | Apple releases patents under W3C royalty-free terms |
| Oct 2014 | W3C Recommendation as part of HTML5 |
toDataURL(): The First Image Export Method
The toDataURL() method, implemented around the same time as Canvas, allows you to get Canvas content as a Base64-encoded Data URL.
```javascript
const canvas = document.getElementById('canvas');
const dataURL = canvas.toDataURL('image/png');
// "data:image/png;base64,iVBORw0KGgo..."
```
It was reportedly available as early as Safari 2.0 (2005), though the spec wasn't finalized yet and behaviors like MIME type handling may have differed from today. This enabled JavaScript to treat Canvas content as images.
However, Base64 encoding increases size by about 33% compared to the original binary data. For large images, URL length limits could sometimes be an issue.
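The ~33% figure follows directly from the encoding: every 3 bytes of binary data become 4 Base64 characters. A quick sketch of the arithmetic (the helper name here is made up for illustration):

```javascript
// Hypothetical helper: estimate the decoded (binary) size of a
// Base64 data URL without actually decoding it.
function estimateBinarySize(dataURL) {
  const base64 = dataURL.split(',')[1];
  // Every 4 Base64 characters encode 3 bytes; '=' padding encodes nothing
  const padding = (base64.match(/=+$/) || [''])[0].length;
  return (base64.length * 3) / 4 - padding;
}

// The 8-byte PNG file signature becomes 12 Base64 characters
console.log(estimateBinarySize('data:image/png;base64,iVBORw0KGgo=')); // 8
```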
The Arrival of WebGL (2011)
In 2009, the Khronos Group established the WebGL Working Group with participation from Apple, Google, Mozilla, Opera, and others. In March 2011, the WebGL 1.0 specification was officially released.
WebGL is a JavaScript binding for OpenGL ES 2.0, enabling GPU-accelerated 3D graphics in the browser. It's provided as a webgl context separate from Canvas's 2D context.
```javascript
const canvas = document.getElementById('canvas');
const gl = canvas.getContext('webgl');
```
| Browser | WebGL 1.0 Support |
|---|---|
| Chrome 9 | Feb 2011 |
| Firefox 4 | Mar 2011 |
| Safari 5.1 | Jul 2011 (*) |
| IE 11 | Oct 2013 |
*Safari 5.1 had the feature implemented but disabled by default — it had to be enabled via the developer menu, so general users couldn't easily use it until later.
WebGL-rendered content is drawn on the same Canvas element, so it can be captured with toDataURL(). However, WebGL requires consideration of the preserveDrawingBuffer option.
The preserveDrawingBuffer Trap
For performance reasons, WebGL doesn't guarantee the drawing buffer's contents after a frame has been presented. If you try to capture an image outside the drawing callback (that is, outside the requestAnimationFrame handler that does the rendering), the content may come back empty (or black).
For example, if you call toDataURL() from a button click handler, the buffer contents are already undefined, so you'll get a black image. You can capture it immediately after shader drawing (within the same frame), but you need to be aware of timing.
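One way to sidestep the flag is to schedule the capture inside the same frame as the draw. A minimal sketch of that pattern (browser-only; `drawScene` is a hypothetical render function you supply):

```javascript
// Capture in the same frame as the draw (browser-only sketch).
// `drawScene` stands in for your own WebGL render function.
function captureAfterDraw(canvas, drawScene) {
  return new Promise((resolve) => {
    requestAnimationFrame(() => {
      drawScene(); // render this frame
      // Read the buffer synchronously, before the browser may clear it
      resolve(canvas.toDataURL('image/png'));
    });
  });
}
```

Because the read happens immediately after the draw, within the same frame, this works even with preserveDrawingBuffer: false.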
If you want to freely read the buffer from JavaScript at any time, specify preserveDrawingBuffer: true when creating the context.
```javascript
// Specify when creating the WebGL context
const gl = canvas.getContext('webgl', {
  preserveDrawingBuffer: true
});
```
With this option enabled, drawing buffer contents are preserved and you can always get correct images with toDataURL(). However, unless you have a clear reason like screenshot functionality, it's generally recommended to keep it false (default) due to performance impact.
By the way, "right-click save" works because the browser internally reads the buffer at the appropriate timing in the rendering pipeline.
Side Note: Browser Right-Click Save Has Improved
In the past (around 2013), there were bug reports where right-click saving WebGL canvas in Firefox would result in a black image. At the time, preserveDrawingBuffer handling varied between browsers, and some behaviors were "undefined" in the spec.
Modern browsers have improved to properly read the buffer at the compositor (screen compositing) timing, so right-click save works correctly even with preserveDrawingBuffer: false. The timing issue remains when calling toDataURL() from JavaScript, but users saving via right-click no longer need to worry about it.
Another Side Note: When Screenshots Turned Videos Black
Going further back, in Windows from the 1990s to early 2000s, pressing PrintScreen while playing video would result in just the video portion being black (or magenta or green).
This was caused by "hardware overlay." PCs at the time had limited performance, so video playback wrote directly to a dedicated video memory region, and the GPU overlaid that region's content during screen composition. The OS screenshot function couldn't recognize this overlay region and only captured the mask color (black or a specific color).
This issue was resolved after Windows Vista introduced the Desktop Window Manager (DWM). With DWM, all windows are composited by the GPU, making the overlay concept unnecessary.
toBlob(): A More Efficient Method (2013~)
The toBlob() method was added to solve the Base64 overhead problem of toDataURL().
```javascript
canvas.toBlob((blob) => {
  // Get as a Blob object
  const url = URL.createObjectURL(blob);
  // Create a download link
  const a = document.createElement('a');
  a.href = url;
  a.download = 'image.png';
  a.click();
}, 'image/png');
```
Blob handles binary data directly, so it's memory-efficient and can process large images without issues.
| Browser | toBlob Support |
|---|---|
| Firefox 19 | Feb 2013 |
| Chrome 50 | Apr 2016 |
| Safari 11 | Sep 2017 (macOS) |
*Mobile browsers (iOS Safari, etc.) may have different support timelines.
Since Chrome's support came as late as 2016, polyfills were needed for a long time. The polyfill mechanism was surprisingly simple: decode the Base64 string from toDataURL() with atob(), convert to Uint8Array, then create a Blob object with new Blob(). It was achieved by combining existing APIs.
```javascript
// Basic toBlob() polyfill mechanism
function dataURLtoBlob(dataURL) {
  const parts = dataURL.split(',');
  const byteString = atob(parts[1]);
  const mimeType = parts[0].split(':')[1].split(';')[0];
  const ab = new ArrayBuffer(byteString.length);
  const ia = new Uint8Array(ab);
  for (let i = 0; i < byteString.length; i++) {
    ia[i] = byteString.charCodeAt(i);
  }
  return new Blob([ab], { type: mimeType });
}
```
These days, you can convert in one line using the fetch() API.
```javascript
const blob = await fetch(dataURL).then((r) => r.blob());
```
Data URLs can be passed as fetch() arguments, so there's no need to manually decode Base64 anymore.
How Right-Click Save Works
The browser's "Save Image As" function also works on Canvas elements. The browser likely performs processing equivalent to toBlob() internally and exports the Canvas's current state as a PNG image (though this is just my speculation — I might be wrong).
Why PNG?
There seem to be reasons why PNG is the default for Canvas image saving.
First, PNG uses lossless compression, so Canvas content can be saved without quality degradation. JPEG uses lossy compression, so quality degrades with each save.
Second, Canvas can have transparent backgrounds, and PNG supports alpha channels (transparency). JPEG doesn't support transparency, so transparent areas become black or white.
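For completeness, toDataURL() (and toBlob()) can also produce JPEG: an optional second argument sets the encoding quality for lossy formats. A small browser-only sketch:

```javascript
// Export a canvas as JPEG instead of PNG (browser-only sketch).
// The second argument (0 to 1) is the JPEG encoding quality.
function toJpegDataURL(canvas, quality = 0.8) {
  // Transparent pixels are flattened, since JPEG has no alpha channel
  return canvas.toDataURL('image/jpeg', quality);
}
```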
Also, PNG is patent-free. It was originally developed to avoid GIF patent issues, making it easier to adopt as a web standard.
By the way, browser add-ons or extensions (like "Save image as Type") let you save in other formats like JPEG or WebP from the right-click menu.
For WebGL too, since framebuffer content is reflected on the Canvas element, it can be saved the same way. Millions of pixels calculated by the GPU every frame can be captured as a still image with a single right-click.
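If you want raw pixel data rather than an encoded image, WebGL also provides gl.readPixels(), which copies the framebuffer into a typed array. A sketch, subject to the same timing caveats as toDataURL():

```javascript
// Read the WebGL framebuffer into a Uint8Array (browser-only sketch).
// Call it in the same frame as the draw, or use preserveDrawingBuffer: true.
function readFramebuffer(gl) {
  const { width, height } = gl.canvas;
  const pixels = new Uint8Array(width * height * 4); // RGBA, 1 byte per channel
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels; // note: GL rows run bottom-to-top
}
```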
Saving as Video: captureStream() and MediaRecorder (2016)
Not just still images — it became possible to save Canvas animations as video. This was a major paradigm shift.
The Struggles Before MediaRecorder
Before 2016, turning Canvas animations into video in the browser was extremely difficult. The main methods were as follows, but none were practical.
Sending frames as DataURL to a server
Generate PNG/JPEG images with toDataURL() every frame and send them to a server to convert with FFmpeg. This required sending and receiving massive amounts of image data, creating high network and server load, and losing real-time capability.
JavaScript encoders (whammy.js, etc.)
whammy.js, which appeared around 2012, converted Canvas frames to WebP and packed them into a WebM container. However, it couldn't do inter-frame compression, resulting in enormous file sizes. It was a "better than GIF" last resort option.
GIF encoders
Libraries like gif.js could generate GIF animations, but they were limited to 256 colors and encoding was slow.
The Browser Became an Encoder
In 2016, the arrival of captureStream() and the MediaRecorder API changed everything.
The browser itself became a video encoder. Browsers now natively perform proper inter-frame compression using codecs like VP8/VP9. JavaScript just calls the API, and encoding is handled by the browser's optimized native code.
```javascript
// Get a stream from the Canvas
const stream = canvas.captureStream(30); // 30fps

// Record with MediaRecorder
const recorder = new MediaRecorder(stream, {
  mimeType: 'video/webm'
});

const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  // Download as a video file
};

recorder.start();
// Stop after a few seconds
setTimeout(() => recorder.stop(), 5000);
```
What Changed
| Aspect | Before MediaRecorder | After MediaRecorder |
|---|---|---|
| Encoding | JS implementation (slow) | Browser native (fast) |
| Compression | No inter-frame compression | VP8/VP9 inter-frame compression |
| File size | Huge | Practical |
| Real-time | Difficult | Possible |
| Server dependency | Often required | Completely client-side |
The background for these APIs was WebRTC.
In 2010, Google acquired GIPS (a VoIP and video conferencing software company) and obtained codec and echo cancellation technologies. In 2011, they open-sourced these as WebRTC. The goal was "enabling real-time video calls in the browser without plugins." At the time, video calls on Skype and Facebook required Flash or plugins.
For WebRTC, getUserMedia() to handle video/audio from cameras and microphones, and the MediaStream interface to handle them uniformly, were created. captureStream() is an extension that allows generating MediaStream from Canvas elements too. In other words, the real-time video processing infrastructure built for video calls was opened up to Canvas as well.
Being able to treat Canvas-drawn content as a "stream" enabled not just recording but also live streaming via WebRTC.
| Browser | captureStream Support |
|---|---|
| Firefox 43 | Dec 2015 |
| Chrome 52 | Jul 2016 |
| Safari 11 | Sep 2017 |
How to Save Videos
Unlike still images, videos can't be saved with right-click. You need to explicitly write download processing in JavaScript to download the recorded Blob.
```javascript
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  // Generate a URL from the Blob
  const url = URL.createObjectURL(blob);
  // Create a download link and click it
  const a = document.createElement('a');
  a.href = url;
  a.download = 'recording.webm';
  a.click();
  // Release the URL when done
  URL.revokeObjectURL(url);
};
```
In other words, recordings lack the "right-click to save" convenience of still images; the application has to implement its own save UI. This is a major difference from still image saving.
Summary
| Year | Technology | What Became Possible |
|---|---|---|
| 2004 | Canvas + toDataURL | Get 2D graphics as Data URL |
| 2011 | WebGL | GPU-accelerated 3D graphics |
| 2013~2016 | toBlob | Efficient image retrieval as Blob |
| 2016 | captureStream + MediaRecorder | Recording and saving as video |
| 2018~ | OffscreenCanvas | Drawing and image generation in Web Workers |
The element Apple created 20 years ago for Dashboard widgets has evolved to the point where we can render 3D fractals calculated by the GPU in real-time and save them as still images or videos.
Recently, OffscreenCanvas allows running heavy drawing processing on Web Workers without blocking the main thread, and generating images directly. Canvas has finally been freed from the main thread.
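As a rough sketch of what that looks like (runs in a browser or Web Worker; note that OffscreenCanvas replaces toBlob() with the promise-based convertToBlob()):

```javascript
// Generate a PNG without touching the main thread (worker-side sketch)
async function renderToBlob(width, height) {
  const off = new OffscreenCanvas(width, height);
  const ctx = off.getContext('2d');
  ctx.fillStyle = 'rebeccapurple';
  ctx.fillRect(0, 0, width, height);
  // OffscreenCanvas has no toBlob(); convertToBlob() returns a Promise<Blob>
  return off.convertToBlob({ type: 'image/png' });
}
```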
Furthermore, WebGPU is being standardized as WebGL's successor. With lower-level GPU access and WGSL, a new shader language, browser graphics expression still has room to evolve.
The "right-click save" we use without thinking twice is built on layers of Canvas, WebGL, Blob, and various other APIs — I found that quite fascinating.