ndesmic

Graphing with Web Components 5: Web GPU

I originally planned for the original series to apply the same API to 4 different types of drawing: SVG, Canvas, WebGL, and CSS. But now that I'm doing more exploration with WebGPU, I thought I might as well compare that too. At the time of this writing WebGPU is not stable but can be found behind flags in Chromium browsers, so we can build an implementation and hopefully it won't change too much. Since I already wrote at length on the basics I won't touch on everything and will pick up assuming you've read that post (there's simply too much to cover).

Boilerplate

Again, we start with the same basic boilerplate:

function hyphenCaseToCamelCase(text) {
    return text.replace(/-([a-z])/g, g => g[1].toUpperCase());
}

class WcGraphWgpu extends HTMLElement {
    #points = [];
    #colors = [];
    #width = 320;
    #height = 240;
    #xmax = 100;
    #xmin = -100;
    #ymax = 100;
    #ymin = -100;
    #func;
    #step = 1;
    #thickness = 1;
    #continuous = false;

    #defaultSize = 4;
    #defaultColor = [1, 0, 0, 1];

    static observedAttributes = ["points", "func", "step", "width", "height", "xmin", "xmax", "ymin", "ymax", "default-size", "default-color", "continuous", "thickness"];
    constructor() {
        super();
        this.bind(this);
    }
    bind(element) {
        element.attachEvents = element.attachEvents.bind(element);
    }
    connectedCallback() {
        this.attachShadow({ mode: "open" });
        this.canvas = document.createElement("canvas");
        this.shadowRoot.appendChild(this.canvas);
        this.canvas.height = this.#height;
        this.canvas.width = this.#width;
        this.context = this.canvas.getContext("webgpu");

        this.render();
        this.attachEvents();
    }
    render() {

    }
    attachEvents() {

    }
    attributeChangedCallback(name, oldValue, newValue) {
        this[hyphenCaseToCamelCase(name)] = newValue;
    }
    set points(value) {
        if (typeof (value) === "string") {
            value = JSON.parse(value);
        }

        this.#points = value.map(p => [
            p[0],
            p[1],
            p[2] ?? this.#defaultColor[0],
            p[3] ?? this.#defaultColor[1],
            p[4] ?? this.#defaultColor[2],
            p[5] ?? this.#defaultColor[3]
        ]).flat();

        this.render();
    }
    get points() {
        return this.#points;
    }
    set width(value) {
        this.#width = parseFloat(value);
    }
    get width() {
        return this.#width;
    }
    set height(value) {
        this.#height = parseFloat(value);
    }
    get height() {
        return this.#height;
    }
    set xmax(value) {
        this.#xmax = parseFloat(value);
    }
    get xmax() {
        return this.#xmax;
    }
    set xmin(value) {
        this.#xmin = parseFloat(value);
    }
    get xmin() {
        return this.#xmin;
    }
    set ymax(value) {
        this.#ymax = parseFloat(value);
    }
    get ymax() {
        return this.#ymax;
    }
    set ymin(value) {
        this.#ymin = parseFloat(value);
    }
    get ymin() {
        return this.#ymin;
    }
    set func(value) {
        this.#func = new Function(["x"], value);
        this.render();
    }
    set step(value) {
        this.#step = parseFloat(value);
    }
    set defaultSize(value) {
        this.#defaultSize = parseFloat(value);
    }
    set defaultColor(value) {
        if (typeof (value) === "string") {
            this.#defaultColor = JSON.parse(value);
        } else {
            this.#defaultColor = value;
        }
    }
    set continuous(value) {
        this.#continuous = value !== undefined;
    }
    set thickness(value) {
        this.#thickness = parseFloat(value);
    }
}

customElements.define("wc-graph-wgpu", WcGraphWgpu);

It's pretty much the same skeleton as WebGL except the context type is webgpu. The points are a little different: it's much easier to work with flat arrays since that's how we'll be passing them to the GPU, so we do all of that flattening in the attribute setter.
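
To make that concrete, here's a small sketch (my own illustration, not part of the component) of what the points setter produces for two points when the default color is red:

// hypothetical input: the second point carries an explicit blue color
const input = [[10, 20], [30, 40, 0, 0, 1, 1]];

// after the setter runs, #points is one flat array with 6 floats per vertex:
// [x,  y,  r, g, b, a,   x,  y,  r, g, b, a]
// [10, 20, 1, 0, 0, 1,   30, 40, 0, 0, 1, 1]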

Setting up the rendering pipeline

Initial setup

First we'll start by setting up some of the initial stuff that doesn't change per render:

#dom;
#context;
#device;
#vertexBufferDescriptor;

async connectedCallback() {
    this.cacheDom();
    await this.setupGpu();
    this.render();
    this.attachEvents();
}
cacheDom(){
    this.attachShadow({ mode: "open" });
    this.#dom = {};
    this.#dom.canvas = document.createElement("canvas");
    this.shadowRoot.appendChild(this.#dom.canvas);
    this.#dom.canvas.height = this.#height;
    this.#dom.canvas.width = this.#width;
}
async setupGpu() {
    const adapter = await navigator.gpu.requestAdapter();
    this.#device = await adapter.requestDevice();
    this.#context = this.#dom.canvas.getContext("webgpu");
    this.#context.configure({
        device: this.#device,
        format: "bgra8unorm"
    });
    this.#vertexBufferDescriptor = [{
        attributes: [
            {
                shaderLocation: 0,
                offset: 0,
                format: "float32x2"
            },
            {
                shaderLocation: 1,
                offset: 8,
                format: "float32x4"
            }
        ],
        arrayStride: 24,
        stepMode: "vertex"
    }];
}

Nothing too weird here. We're doing some basic DOM setup, getting a WebGPU device, and associating it with the canvas. The vertexBufferDescriptor sets up the vertex buffer format: the first 2 float32s are the position, the next 4 are the color.
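
If it helps to see where the offsets and the 24-byte arrayStride come from, this is the arithmetic (just an illustration):

const FLOAT32_BYTES = 4;
const positionBytes = 2 * FLOAT32_BYTES; // float32x2 position: 8 bytes at offset 0
const colorBytes = 4 * FLOAT32_BYTES;    // float32x4 color: 16 bytes at offset 8
const arrayStride = positionBytes + colorBytes; // 24 bytes per vertex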

The pipeline

#shaderModule;
#renderPipeline;

//call this after `setupGpu` in connectedCallback
async loadShaderPipeline() {
    this.#shaderModule = this.#device.createShaderModule({
        code: `
            <placeholder>
        `
    });
    const pipelineDescriptor = {
        vertex: {
            module: this.#shaderModule,
            entryPoint: "vertex_main",
            buffers: this.#vertexBufferDescriptor
        },
        fragment: {
            module: this.#shaderModule,
            entryPoint: "fragment_main",
            targets: [
                {
                    format: "bgra8unorm"
                }
            ]
        },
        primitive: {
            topology: "point-list"
        }
    };
    this.#renderPipeline = this.#device.createRenderPipeline(pipelineDescriptor);
}

The pipeline describes how vertices are made into images. The most interesting thing here is the shader module, which is where we supply the shader code (we always use the same shader so this only has to be done once). We'll discuss the shader in its own section, so skip that for now. Next is the pipeline descriptor. We use the shader module and vertexBufferDescriptor from the last step to construct it. The entry points reference shader function names that don't exist yet; just know that they will match our eventual function names. The fragment stage targets "bgra8unorm", matching the canvas configuration. Finally, the primitive we're using is a point-list, which is a list of points similar to, but as we'll find not quite like, the gl.POINTS we used in the WebGL version.

Vertex mapping

render(){
  const vertexBuffer = this.#device.createBuffer({
    size: this.#points.length * 4,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    mappedAtCreation: true
  });
  new Float32Array(vertexBuffer.getMappedRange()).set(this.#points);
  vertexBuffer.unmap();

  //more to come...
}

We start by creating a buffer and writing the points to it. The array is flattened, so the total size of the buffer is length * 4 bytes because each element is a float32 (each set of 6 values in the buffer describes one vertex).
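
As a quick sanity check with the 4 test points used later in this post, the sizes work out like this:

// 4 vertices x 6 floats per vertex = 24 entries in #points
// 24 entries x 4 bytes per float32 = 96-byte vertex buffer
const bufferSize = 4 * 6 * 4; // 96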

render(){
    //...write to vertex buffer (above)
    const clearColor = { r: 0.0, g: 0.5, b: 1.0, a: 1.0 };
    const renderPassDescriptor = {
        colorAttachments: [
            {
                loadValue: clearColor,
                storeOp: "store",
                view: this.#context.getCurrentTexture().createView()
            }
        ]
    };
    const commandEncoder = this.#device.createCommandEncoder();
    const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
    passEncoder.setPipeline(this.#renderPipeline);
    passEncoder.setVertexBuffer(0, vertexBuffer);
    passEncoder.draw(this.#points.length / 6);
    passEncoder.endPass();
    this.#device.queue.submit([commandEncoder.finish()]);
}

The clear color is the background color (a cornflower blue). The pass descriptor uses the clear color and outputs to the canvas context's current texture. Next, we create the command encoder to encode the draw instructions. Using our vertex buffer and render pipeline from the last steps, we create the pass. We draw this.#points.length / 6 vertices because each vertex has 6 values and we've flattened the array. Finally we submit the encoded instructions to the device queue and it draws.

The shader

At this point it should at least not error, but it won't draw anything because the shader is invalid. Let's fix that.

struct VertexOut {
    [[builtin(position)]] position : vec4<f32>;
    [[location(0)]] color : vec4<f32>;
};
[[stage(vertex)]]
fn vertex_main([[location(0)]] position: vec2<f32>, [[location(1)]] color: vec4<f32>) -> VertexOut
{
    var output : VertexOut;
    output.position = vec4<f32>(position, 0.0, 1.0);
    output.color = color;
    return output;
}
[[stage(fragment)]]
fn fragment_main(fragData: VertexOut) -> [[location(0)]] vec4<f32>
{
    return fragData.color;
}

This is basically a no-op shader that takes in values as defined by our vertex buffer descriptor. It takes the 2d positions, turns them into 4d (since the built-in position takes 4-component vectors), and passes through the color, which the fragment shader then outputs.

Testing the pipeline

Now we have a full pipeline in place. Let's give it a test. Keep in mind we haven't yet dealt with the viewport scaling, so we'll keep the points within -1 to 1 so they show up. I chose [[-0.5,-0.5], [0.5,-0.5], [0.5,0.5], [-0.5,0.5]].
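
For reference, this is roughly how I'm exercising the element (a sketch, assuming the module defining wc-graph-wgpu has already been imported):

document.body.innerHTML = `
    <wc-graph-wgpu points="[[-0.5,-0.5],[0.5,-0.5],[0.5,0.5],[-0.5,0.5]]"></wc-graph-wgpu>
`;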

[Screenshot 2021-08-28 105818]

Hmmm...it's not working. Or is it? Zoom in on the top right:

[Screenshot 2021-08-28 105908]

Well now, it appears that point-list works but it only gives us 1-pixel points. Unlike gl.POINTS we can't even change the size so this is fairly useless. Oh well.

We can also allow for continuous mode:

const pipelineDescriptor = {
    vertex: {
        module: this.#shaderModule,
        entryPoint: "vertex_main",
        buffers: this.#vertexBufferDescriptor
    },
    fragment: {
        module: this.#shaderModule,
        entryPoint: "fragment_main",
        targets: [
            {
                format: "bgra8unorm"
            }
        ]
    }
};
if(this.#continuous){
    pipelineDescriptor.primitive = {
        topology: "line-strip",
        stripIndexFormat: "uint16"
    }
} else {
    pipelineDescriptor.primitive = {
        topology: "point-list"
    }
}

If it's continuous then we can use a line-strip topology instead. However, we also need to provide the index format. I really don't know why this is needed. Just so you can save a couple bytes on the index size? But you do need it, and you can choose a 16 or 32 bit unsigned integer. In continuous mode it looks like this:

[Screenshot: line-strip]

We can see the lines, but yet again we can't change their width (WebGL also had this problem).
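
As a quick note on what changed, this is how the two topologies consume the same vertex buffer (my annotation):

// point-list:  v0  v1  v2  v3    -> 4 separate 1-pixel points
// line-strip:  v0--v1--v2--v3    -> 3 connected segments (N vertices, N-1 lines)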

Scaling to the viewport

Again we can use the inverse lerp from the WebGL version. It's expressed like this in WGSL:

fn inverse_lerp(a: f32, b: f32, v: f32) -> f32{
    return (v-a)/(b-a);
}

We also need to get the bounds of the canvas. We'll be passing that in so we need to create a struct to hold the data in the shader:

[[block]]
struct Bounds {
    left: f32;
    right: f32;
    top: f32;
    bottom: f32;
};

The [[block]] attribute is necessary for binding, and the 4 properties of the struct are f32s (mind that final semicolon, it's required!). We then need to create the actual variable that will hold the bounds and be bound to:

[[group(0), binding(0)]] var<uniform> bounds: Bounds;

It's a variable of type Bounds but annotated so that it accepts bindings from bind group 0. The <uniform> is also necessary, I guess because the shader needs to know it's a uniform (basically a bound constant for the shader program).

In render we can set up the binding the same way we did the vertex buffer: we need to encode the struct as a buffer.

const boundsBuffer = this.#device.createBuffer({
    size: 16,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
    mappedAtCreation: true
});
new Float32Array(boundsBuffer.getMappedRange()).set([this.#xmin, this.#xmax, this.#ymin, this.#ymax]);
boundsBuffer.unmap();

The difference is we are using a usage type of UNIFORM. We also create the bind group:

const boundsGroup = this.#device.createBindGroup({
    layout: this.#renderPipeline.getBindGroupLayout(0),
    entries: [
        {
            binding: 0,
            resource: {
                buffer: boundsBuffer,
                offset: 0,
                size: 16
            }
        }
    ]
});

The layout index corresponds to the group(0) annotation, and the binding index corresponds to the binding(0) annotation. We stuff the 4 f32 values into a buffer and pull them out as a struct. No offset is necessary, and the size is the exact size of the buffer.
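
One thing worth spelling out (my annotation): the 4 floats land in the struct fields in declaration order, which is why left/right hold the x bounds and top/bottom hold the y bounds:

// Float32Array([xmin, xmax, ymin, ymax]) fills the WGSL struct as:
//   bounds.left   = xmin
//   bounds.right  = xmax
//   bounds.top    = ymin
//   bounds.bottom = ymax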

We need to let the pipeline know we're using this:

passEncoder.setBindGroup(0, boundsGroup);

I put this directly after setVertexBuffer. The 0 is the layout index.

Finally in the shader body we can change the position to use the scaling operation:

output.position = vec4<f32>(
    mix(-1.0, 1.0, inverse_lerp(bounds.left, bounds.right,  position[0])),
    mix(-1.0, 1.0, inverse_lerp(bounds.top, bounds.bottom, position[1])),
    0.0, 
    1.0
);

WGSL has a mix function just like GLSL (too bad they didn't take the opportunity to rename it to lerp...).
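
To see what the scaling does with real numbers, here's the same operation written out in plain JavaScript (an illustration using the component's default bounds of -100 to 100):

const inverseLerp = (a, b, v) => (v - a) / (b - a);
const mix = (a, b, t) => a + (b - a) * t;

// x = 50 sits 75% of the way between -100 and 100,
// so it lands at 0.5 in clip space
mix(-1, 1, inverseLerp(-100, 100, 50)); // 0.5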

Drawing the guides

To complete the graph we draw the cross-hair guides. While I could have used the same trick as in the WebGL version of creating sets of points and re-using the same shader I thought I'd spell it out this time to get some more WebGPU practice.

Since this is a different draw pass we'll give it its own shader. I set everything up for this pass once:

#guidePipeline;
#guideVertexBuffer;

setupGuidePipeline(){
    const shaderModule = this.#device.createShaderModule({
        code: `
            struct VertexOut {
                [[builtin(position)]] position : vec4<f32>;
            };
            [[stage(vertex)]]
            fn vertex_main([[location(0)]] position: vec2<f32>) -> VertexOut
            {
                var output : VertexOut;
                output.position = vec4<f32>(
                    position,
                    0.0, 
                    1.0
                );
                return output;
            }
            [[stage(fragment)]]
            fn fragment_main(fragData: VertexOut) -> [[location(0)]] vec4<f32>
            {
                return vec4<f32>(0.0, 0.0, 0.0, 1.0);
            }
        `
    });
    const vertexBufferDescriptor = [{
        attributes: [
            {
                shaderLocation: 0,
                offset: 0,
                format: "float32x2"
            }
        ],
        arrayStride: 8,
        stepMode: "vertex"
    }];
    const pipelineDescriptor = {
        vertex: {
            module: shaderModule,
            entryPoint: "vertex_main",
            buffers: vertexBufferDescriptor
        },
        fragment: {
            module: shaderModule,
            entryPoint: "fragment_main",
            targets: [
                {
                    format: "bgra8unorm"
                }
            ]
        },
        primitive: {
            topology: "line-list"
        }
    };
    this.#guidePipeline = this.#device.createRenderPipeline(pipelineDescriptor);
    this.#guideVertexBuffer = this.#device.createBuffer({
        size: 32,
        usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
        mappedAtCreation: true
    });
    new Float32Array(this.#guideVertexBuffer.getMappedRange()).set([0.0, -1, 0.0, 1, -1, 0.0, 1, 0.0]);
    this.#guideVertexBuffer.unmap();
}

The shader code shouldn't be surprising: it takes 2d positions and draws black lines. As such the vertex buffer descriptor is simple too, just a single 2d (2 x f32) position per vertex. The pipeline descriptor should also be familiar; it's the same one we've been using but with a topology of line-list. A line-list is what it sounds like: each set of 2 vertices is a line. We need to draw 2 lines so we need a vertex buffer with 4 points, which are the coordinates of the start and end of the horizontal and vertical lines in screen space (-1 to 1). We save the whole pipeline as #guidePipeline to be used later in the render step along with the #guideVertexBuffer. We only need to do this once since we can reuse both of them for every render.
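
For clarity, here's how those 8 floats break down into the two guide lines (same data as above, just annotated):

new Float32Array(this.#guideVertexBuffer.getMappedRange()).set([
    0.0, -1,   0.0, 1,   // vertical guide:   (0, -1) -> (0, 1)
    -1, 0.0,   1, 0.0    // horizontal guide: (-1, 0) -> (1, 0)
]);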

In render we only need to add a few things. In order to to draw different objects we need to make separate calls to draw on the passEncoder.

const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);

//draw guides
passEncoder.setPipeline(this.#guidePipeline);
passEncoder.setVertexBuffer(0, this.#guideVertexBuffer);
passEncoder.draw(4);

//draw points
passEncoder.setPipeline(this.#renderPipeline);
passEncoder.setVertexBuffer(0, vertexBuffer);
passEncoder.setBindGroup(0, boundsGroup);
passEncoder.draw(this.#points.length / 6);

passEncoder.endPass();

So we just get a single pass encoder, set the pipeline and vertices, then call draw for the guides, and then again for the points (the points also set a bind group for the bounds). Then we call endPass. The rest is exactly the same as last time.

Conclusion

So this leaves us in an interesting spot. Like WebGL, we lack certain features that would let us do things like size the points and thicken the lines. In fact, WebGPU is worse because we can't even control the point size. It's even more verbose too, and at the time of writing no browser supports WebGPU in the stable channel. All-in-all it's perhaps not a great choice for this, but it's interesting to compare and contrast. However, I'd also like to look into ways we can get things like line thickness and point shape in the future, as it feels a little disappointing to end here.

Top comments (2)

Ken Snyder

Oh that's a disappointing end to the WebGPU chapter. I wasn't expecting it to be usable in browsers just yet (without flags that is) but OpenGL is clearly a dying animal. I'm new to all of this but found your articles useful. I do have one question though ... surely there's a way in WebGPU to animate lines/bars/etc when data changes, isn't there?

ndesmic

I haven't really written anything about animation so far. But yes, animation is possible, though it's all manual for canvas/WebGL/WebGPU. You'd basically draw the graph each frame. So you might use something like requestAnimationFrame and redraw each frame, each time moving/scaling/rotating etc. by a little bit. Say you want a bar to grow from 0 to its final value. If you know how long you want the animation to last (e.g. 0.5 seconds) you can use a lerp: find the ratio of the time that has elapsed and then multiply that by the final value to get the intermediate value for that frame. CSS is much easier since all of that is built in.
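
For anyone curious, a minimal sketch of that idea might look something like this (not from the article, just an illustration of the lerp-per-frame approach):

const duration = 500;  // animation length in ms
const finalValue = 80; // the bar's final height
const start = performance.now();

function frame(now) {
    const t = Math.min((now - start) / duration, 1); // elapsed ratio, clamped to 1
    const value = finalValue * t;                    // lerp from 0 toward finalValue
    // ...redraw the graph/bar here using `value`...
    if (t < 1) requestAnimationFrame(frame);
}
requestAnimationFrame(frame);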