Mingming Ma

Going down the rabbit hole of the canvas

Recently, I encountered an issue on ChatCraft that required me to resize an image. Doing this on the frontend (in the browser) involves the canvas element. In simple terms, the process can be illustrated with the example below.

//an example of how you might use a canvas to 
//compress an image using JavaScript:
function compressImage(image, outputFormat, quality, callback) {
  // Create a canvas element
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');

  // Set the canvas size to the desired output size
  canvas.width = image.width / 2; // Example: reduce width by half
  canvas.height = image.height / 2; // Example: reduce height by half

  // Draw the image onto the canvas
  ctx.drawImage(image, 0, 0, canvas.width, canvas.height);

  // Convert the canvas to a Blob (binary large object)
  canvas.toBlob(function(blob) {
    // The blob is a compressed version of the image
    callback(blob);
  }, outputFormat, quality);
}

// Usage example
const image = new Image();
image.src = 'path/to/your/image.jpg'; // Load your image here
image.onload = function() {
  compressImage(image, 'image/jpeg', 0.7, function(compressedBlob) {
    // Do something with the compressed image blob
    console.log('Compressed image size:', compressedBlob.size);
  });
};

This approach fascinated me and prompted me to explore the depths of the canvas.

In the example above, let's look at these two lines:

const ctx = canvas.getContext('2d');
ctx.drawImage(image, 0, 0, canvas.width, canvas.height);

First, canvas.getContext('2d') calls the HTMLCanvasElement method HTMLCanvasElement.getContext(). MDN describes it as follows:

A drawing context lets you draw on the canvas. Calling getContext with "2d" returns a CanvasRenderingContext2D object, whereas calling it with "webgl" (or "experimental-webgl") returns a WebGLRenderingContext object. This context is only available on browsers that implement WebGL.

So ctx.drawImage(image, 0, 0, canvas.width, canvas.height) uses the CanvasRenderingContext2D method drawImage(), which has three forms:

drawImage(image, dx, dy)
drawImage(image, dx, dy, dWidth, dHeight)
drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight)

Usage:

  • ctx.drawImage(image, dx, dy) - Draws the image at its natural size at the specified position on the canvas.
  • ctx.drawImage(image, dx, dy, dWidth, dHeight) - Draws the image, scaling it to fit the specified width and height at the specified position on the canvas.
  • ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight) - Draws a specified area of the source image to a specified area on the canvas, allowing for both cropping and scaling.
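The 9-argument form maps the source rectangle (sx, sy, sw, sh) onto the destination rectangle (dx, dy, dw, dh). As a rough sketch of that coordinate mapping (the helper name is mine for illustration, not part of the canvas API):

```javascript
// Hypothetical helper (not a canvas API): maps a point inside the
// source rectangle to the corresponding point inside the destination
// rectangle, mirroring what the 9-argument drawImage form does.
function mapSourceToDest(sx, sy, sw, sh, dx, dy, dw, dh, px, py) {
  const scaleX = dw / sw; // horizontal scale factor
  const scaleY = dh / sh; // vertical scale factor
  return {
    x: dx + (px - sx) * scaleX,
    y: dy + (py - sy) * scaleY,
  };
}

// Crop a 100x100 region starting at (50, 50) and draw it into a
// 200x200 area at (0, 0): every source offset is scaled by 2.
const p = mapSourceToDest(50, 50, 100, 100, 0, 0, 200, 200, 100, 100);
console.log(p); // { x: 100, y: 100 }
```

This is why the 9-argument form can crop and scale at the same time: the source rectangle selects the pixels, and the destination rectangle decides where and how large they land.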

Then I found a gem: the article The HTML5 Canvas Handbook. Although it was written in 2013, it is still applicable today, and it makes various aspects easier for beginners to understand than reading MDN directly. I highly recommend it.

In the 2.2.11 Drawing images section, you can find straightforward examples. The author provides a series of images to demonstrate the usage of drawImage.

Next, I tried to find the source code for drawImage, and I found CanvasRenderingContext2D.cpp in Blink, the browser engine developed as part of the Chromium project.

void CanvasRenderingContext2D::drawImage(const CanvasImageSourceUnion& imageSource, float x, float y, ExceptionState& exceptionState)
{
    CanvasImageSource* imageSourceInternal = toImageSourceInternal(imageSource);
    FloatSize sourceRectSize = imageSourceInternal->elementSize();
    FloatSize destRectSize = imageSourceInternal->defaultDestinationSize();
    drawImage(imageSourceInternal, 0, 0, sourceRectSize.width(), sourceRectSize.height(), x, y, destRectSize.width(), destRectSize.height(), exceptionState);
}
void CanvasRenderingContext2D::drawImage(const CanvasImageSourceUnion& imageSource,
    float x, float y, float width, float height, ExceptionState& exceptionState)
{
    CanvasImageSource* imageSourceInternal = toImageSourceInternal(imageSource);
    FloatSize sourceRectSize = imageSourceInternal->elementSize();
    drawImage(imageSourceInternal, 0, 0, sourceRectSize.width(), sourceRectSize.height(), x, y, width, height, exceptionState);
}
void CanvasRenderingContext2D::drawImage(const CanvasImageSourceUnion& imageSource,
    float sx, float sy, float sw, float sh,
    float dx, float dy, float dw, float dh, ExceptionState& exceptionState)
{
    CanvasImageSource* imageSourceInternal = toImageSourceInternal(imageSource);
    drawImage(imageSourceInternal, sx, sy, sw, sh, dx, dy, dw, dh, exceptionState);
}

All three of these overloads end up calling:

void CanvasRenderingContext2D::drawImage(CanvasImageSource* imageSource,
    float sx, float sy, float sw, float sh,
    float dx, float dy, float dw, float dh, ExceptionState& exceptionState)
{
    if (!drawingCanvas())
        return;
    RefPtr<Image> image;
    SourceImageStatus sourceImageStatus = InvalidSourceImageStatus;
    if (!imageSource->isVideoElement()) {
        image = imageSource->getSourceImageForCanvas(&sourceImageStatus);
        if (sourceImageStatus == UndecodableSourceImageStatus)
            exceptionState.throwDOMException(InvalidStateError, "The HTMLImageElement provided is in the 'broken' state.");
        if (!image || !image->width() || !image->height())
            return;
    } else {
        if (!static_cast<HTMLVideoElement*>(imageSource)->hasAvailableVideoFrame())
            return;
    }
    if (!std::isfinite(dx) || !std::isfinite(dy) || !std::isfinite(dw) || !std::isfinite(dh)
        || !std::isfinite(sx) || !std::isfinite(sy) || !std::isfinite(sw) || !std::isfinite(sh)
        || !dw || !dh || !sw || !sh)
        return;
    FloatRect srcRect = normalizeRect(FloatRect(sx, sy, sw, sh));
    FloatRect dstRect = normalizeRect(FloatRect(dx, dy, dw, dh));
    clipRectsToImageRect(FloatRect(FloatPoint(), imageSource->elementSize()), &srcRect, &dstRect);
    imageSource->adjustDrawRects(&srcRect, &dstRect);
    if (srcRect.isEmpty())
        return;
    if (shouldDisableDeferral(imageSource) || image->imageForCurrentFrame()->isTextureBacked())
        canvas()->disableDeferral();
    validateStateStack();
    draw(
        [this, &imageSource, &image, &srcRect, dstRect](SkCanvas* c, const SkPaint* paint) // draw lambda
        {
            drawImageInternal(c, imageSource, image.get(), srcRect, dstRect, paint);
        },
        [this, &dstRect](const SkIRect& clipBounds) // overdraw test lambda
        {
            return rectContainsTransformedRect(dstRect, clipBounds);
        }, dstRect, CanvasRenderingContext2DState::ImagePaintType,
        imageSource->isOpaque() ? CanvasRenderingContext2DState::OpaqueImage : CanvasRenderingContext2DState::NonOpaqueImage);
    validateStateStack();
    bool isExpensive = false;
    if (ExpensiveCanvasHeuristicParameters::SVGImageSourcesAreExpensive && image && image->isSVGImage())
        isExpensive = true;
    if (imageSource->elementSize().width() * imageSource->elementSize().height() > canvas()->width() * canvas()->height() * ExpensiveCanvasHeuristicParameters::ExpensiveImageSizeRatio)
        isExpensive = true;
    if (isExpensive) {
        ImageBuffer* buffer = canvas()->buffer();
        if (buffer)
            buffer->setHasExpensiveOp();
    }
    if (imageSource->isCanvasElement() && static_cast<HTMLCanvasElement*>(imageSource)->is3D()) {
        // WebGL to 2D canvas: must flush graphics context to prevent a race
        // FIXME: crbug.com/516331 Fix the underlying synchronization issue so this flush can be eliminated.
        canvas()->buffer()->flushGpu();
    }
    if (canvas()->originClean() && wouldTaintOrigin(imageSource))
        canvas()->setOriginTainted();
}

This is a lengthy function, but we can observe one crucial call:

 drawImageInternal(c, imageSource, image.get(), srcRect, dstRect, paint);

And in the drawImageInternal function, we can see:

void CanvasRenderingContext2D::drawImageInternal(SkCanvas* c, CanvasImageSource* imageSource, Image* image, const FloatRect& srcRect, const FloatRect& dstRect, const SkPaint* paint)
{
    int initialSaveCount = c->getSaveCount();
    SkPaint imagePaint = *paint;
    if (paint->getImageFilter()) {
        SkMatrix ctm = c->getTotalMatrix();
        SkMatrix invCtm;
        if (!ctm.invert(&invCtm)) {
            // There is an earlier check for invertibility, but the arithmetic
            // in AffineTransform is not exactly identical, so it is possible
            // for SkMatrix to find the transform to be non-invertible at this stage.
            // crbug.com/504687
            return;
        }
        c->save();
        c->concat(invCtm);
        SkRect bounds = dstRect;
        ctm.mapRect(&bounds);
        SkRect filteredBounds;
        paint->getImageFilter()->computeFastBounds(bounds, &filteredBounds);
        SkPaint layerPaint;
        layerPaint.setXfermode(paint->getXfermode());
        layerPaint.setImageFilter(paint->getImageFilter());
        c->saveLayer(&filteredBounds, &layerPaint);
        c->concat(ctm);
        imagePaint.setXfermodeMode(SkXfermode::kSrcOver_Mode);
        imagePaint.setImageFilter(nullptr);
    }
    if (!imageSource->isVideoElement()) {
        imagePaint.setAntiAlias(shouldDrawImageAntialiased(dstRect));
        image->draw(c, imagePaint, dstRect, srcRect, DoNotRespectImageOrientation, Image::DoNotClampImageToSourceRect);
    } else {
        c->save();
        c->clipRect(dstRect);
        c->translate(dstRect.x(), dstRect.y());
        c->scale(dstRect.width() / srcRect.width(), dstRect.height() / srcRect.height());
        c->translate(-srcRect.x(), -srcRect.y());
        HTMLVideoElement* video = static_cast<HTMLVideoElement*>(imageSource);
        video->paintCurrentFrame(c, IntRect(IntPoint(), IntSize(video->videoWidth(), video->videoHeight())), &imagePaint);
    }
    c->restoreToCount(initialSaveCount);
}

In the above function, this line

        c->scale(dstRect.width() / srcRect.width(), dstRect.height() / srcRect.height());

tells us it uses SkCanvas's scale method. You will notice that SkCanvas actually comes from the third-party library Skia:

//CanvasRenderingContext2D.cpp
#include "third_party/skia/include/core/SkCanvas.h"

Skia is an open-source 2D graphics library written in C++ that serves as the graphics engine for Google Chrome, ChromeOS, Android, Flutter, and many other products. Here is the source code of the scale function:

void SkCanvas::scale(SkScalar sx, SkScalar sy) {
    if (sx != 1 || sy != 1) {
        this->checkForDeferredSave();
        fMCRec->fMatrix.preScale(sx, sy);

        this->topDevice()->setGlobalCTM(fMCRec->fMatrix);

        this->didScale(sx, sy);
    }
}

If you like, you can continue digging into how the scaling transformation is applied to the matrix in the preScale function.
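Conceptually, preScale multiplies the scale into the current transformation matrix so that it applies to incoming (local) coordinates. Here is a minimal sketch in plain JavaScript using the familiar [a, b, c, d, e, f] affine matrix layout from canvas's setTransform; this is an illustration of the idea, not Skia's actual implementation:

```javascript
// Sketch of a 2D affine matrix with preScale. Layout matches
// canvas's setTransform(a, b, c, d, e, f):
//   | a c e |
//   | b d f |
// preScale(sx, sy) folds a scale into the matrix so the scale is
// applied to local coordinates before the existing transform.
class Matrix2D {
  constructor(a = 1, b = 0, c = 0, d = 1, e = 0, f = 0) {
    Object.assign(this, { a, b, c, d, e, f });
  }
  preScale(sx, sy) {
    this.a *= sx;
    this.b *= sx;
    this.c *= sy;
    this.d *= sy;
    return this;
  }
  // Apply the matrix to a point, as rasterization would.
  transformPoint(x, y) {
    return {
      x: this.a * x + this.c * y + this.e,
      y: this.b * x + this.d * y + this.f,
    };
  }
}

// Scaling by (0.5, 0.5) halves logical coordinates, like drawing an
// image at half its size on the canvas.
const m = new Matrix2D().preScale(0.5, 0.5);
console.log(m.transformPoint(100, 80)); // { x: 50, y: 40 }
```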

In terms of the pixel grid on a computer screen or canvas, a scaling transformation affects how drawing commands are mapped to actual pixels. In simple terms, the general process is:


  • Original Coordinates: Each shape or image has coordinates that determine its position and size on the canvas. These coordinates are based on a logical coordinate system, not directly tied to the physical pixels.
  • Transformation Matrix: When a scaling transformation is applied, the transformation matrix is updated to include the scaling factors. This matrix is used to transform the logical coordinates of the shapes or images.
  • Rasterization: During rasterization, the transformed logical coordinates are converted to pixel coordinates. The rendering engine uses the transformation matrix to determine where and how to place each shape or image on the pixel grid.
  • Pixel Mapping: If an object is scaled up, more pixels are used to represent it, making it appear larger on the screen. Conversely, if an object is scaled down, fewer pixels represent it, making it appear smaller.
  • Interpolation: When scaling images, interpolation algorithms (like nearest-neighbor, bilinear, or bicubic interpolation) determine the color values of the new pixels based on the original pixels to create a smooth transition and minimize artifacts.
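To make the pixel-mapping and interpolation steps concrete, here is a minimal nearest-neighbor scaler over a plain grayscale pixel array. This is just a sketch in plain JavaScript; real engines like Skia do this in heavily optimized native code, usually with better interpolation:

```javascript
// Nearest-neighbor scaling over a grayscale image stored as a flat
// array of width * height values. Each destination pixel samples the
// single closest source pixel - fast, but it can look blocky.
function scaleNearest(src, srcW, srcH, dstW, dstH) {
  const dst = new Array(dstW * dstH);
  for (let y = 0; y < dstH; y++) {
    for (let x = 0; x < dstW; x++) {
      // Map the destination pixel back into source coordinates.
      const sx = Math.floor((x * srcW) / dstW);
      const sy = Math.floor((y * srcH) / dstH);
      dst[y * dstW + x] = src[sy * srcW + sx];
    }
  }
  return dst;
}

// Downscale a 4x4 image to 2x2: each output pixel takes the top-left
// pixel of its 2x2 source block.
const src = [
  10, 20, 30, 40,
  50, 60, 70, 80,
  90, 100, 110, 120,
  130, 140, 150, 160,
];
console.log(scaleNearest(src, 4, 4, 2, 2)); // [ 10, 30, 90, 110 ]
```

Bilinear or bicubic interpolation would instead blend several neighboring source pixels, which is why scaled canvas images usually look smooth rather than blocky.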

I stopped tracing the source code here. To end this blog post, I would like to share my discovery of React Native Skia, which brings the Skia Graphics Library to React Native. You can get more information from the Shopify Engineering blog post Getting Started with React Native Skia. Hope it helps; see you next post.
