Introduction
Hi everyone! In this article, I would like to revisit Voronoi generation in WebGPU, but this time from a procedural texture generation perspective, as opposed to the technique I presented in a previous YouTube recording, where I was rendering actual polygons and lines and running a full Delaunay triangulation stage on each frame. You can check that video here:
Here is a small overview of the final result we will achieve in this tutorial:
- We will see how to display different colors for the different cells,
- How to display a circular dot at the center of each cell,
- And how to apply a time animation on the full Voronoi diagram to get a constantly moving effect.
- We'll even see how we can dynamically change the size of the diagram.
- But before that, of course, we will need to cover some simple theory and preliminary computation steps to really understand how this algorithm works.
Before we start
Now, before we dive any deeper, I have to mention that I will roughly be following the same logic as in this ShaderToy Voronoi example by Inigo Quilez.
But understanding this base code might not be so easy if you are not familiar with the algorithm yet, so we'll provide some explanations along the way.
Anyway, if you don't know Inigo Quilez yet, I would also highly suggest paying a visit to his website and reading as much as possible from there, especially the technical articles, because, honestly, this guy is a genius 🙂!
Also, just like last time, I have added a dedicated folder in the NervLandAdventures GitHub repository for this tutorial, where you will find the incremental versions of the compute shader we will create here, in case you would like to take a closer look at them.
And now, let's get started!
Setting up our Display Surface
The first thing we need to do for this project is to set up a simple surface on which we will render our generated texture.
As before, I will be using a minimal scene construction inside the TerrainView app as our playground, and here is the YAML element I used for this:
voronoi_surface:
type: ComputeSurfaceBlueprint
node: quads0
transform: subject_pos
anchor: bottom | center
shader: texture/voronoi
texture_width: 1920
texture_height: 1080
width: 6.0
aspect: 16/9
The ComputeSurfaceBlueprint is a new type of element I just introduced, which can be used to specify an arbitrary compute shader that will generate the texture to display on the surface quad. In this case, for instance, we will use the shader from the file texture/voronoi.wgsl, which we will get to in just a moment.
Since we are generating this texture from scratch, note that we also specify the dimensions of the output texture in this config: the texture_width and texture_height entries. Note: those texture dimensions are not directly related to the width and height parameters that can be provided to specify the physical size of the quad in the world.
In the ComputeSurfaceBlueprint class, we then proceed in a similar way to what was done in the VideoSurfaceBlueprint described in our previous video tutorial (cf. https://www.youtube.com/watch?v=P1jxvLm6SwE): we allocate a dedicated area on our texture atlas to hold the resulting texture data, then set up some processing to write valid data into that area, and that content is shown on the final quad with the same mechanism as in the previous recording.
So, first we allocate an area with the appropriate size in our main texture atlas with this code:
logDEBUG("Creating target BSP area of size {}x{}", _textureWidth,
_textureHeight);
auto area = bsp.add_area((U32)_textureWidth, (U32)_textureHeight);
NVCHK(area.is_valid(), "Invalid BSP area.");
And then we create a ComputeNode with the provided shader, and set up the only bind group we will need for this shader execution:
logDEBUG("Building compute node for {}", _shader);
auto cnode = WGPUComputeNode::create(_shader);
struct Params {
Vec3u textureSize;
U32 pad0{0};
Vec3u origin;
U32 pad1{0};
};
UniformBuffer<Params> params{
Params{.textureSize = Vec3u(_textureWidth, _textureHeight, 1),
.origin = Vec3u(area.rect.xmin, area.rect.ymin, area.layer)}};
auto& tex = bsp.get_texture();
auto bgrp0 = cnode->setup_bind_grp({
params.as_ubo(),
stoTex2dArray(tex),
});
Of course, the code above is from the NervLand engine, so it is not using the pure native WebGPU API, but I think you should still be able to grasp the main ideas:
- The Params struct is used to provide a uniform struct on the shader side: we need to give it the size of the texture we want to generate, and the origin point in the texture atlas where we should start writing the output data.
- Then, to build the bind group, we provide those params as what I call a "uniform buffer object" in the engine, along with the underlying 2D texture array from the texture atlas as a writable storage texture array.
Next, we will prepare an actual dispatch for the compute node and the binding group we just created:
// Need to figure out here how many dispatches to do:
auto nx = (_textureWidth + 15U) / 16U;
auto ny = (_textureHeight + 15U) / 16U;
cnode->add({nx, ny}, {bgrp0});
scene.get_compute_pass(id_prerender_cpass).add(*cnode);
You see that here we use the texture dimensions to compute the dispatch sizes, and we assume a workgroup size of 16x16x1 in the shader.
And finally we add this compute node to the "prerender compute pass", which is a specific compute pass in my engine executed once at the beginning of each render cycle and before any other rendering steps.
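The workgroup-count computation above is just an integer ceiling division: each workgroup covers a 16x16 tile of pixels, and any partial tile at the border still needs one full workgroup. Here is a quick CPU-side Python sketch of that logic (just an illustration, not engine code):

```python
def dispatch_counts(width, height, wg=16):
    # Integer ceiling division: one workgroup covers wg x wg pixels,
    # and a partial tile at the border still needs a full workgroup.
    nx = (width + wg - 1) // wg
    ny = (height + wg - 1) // wg
    return nx, ny

# For the 1920x1080 texture configured above:
print(dispatch_counts(1920, 1080))  # -> (120, 68)
```

Note that 1080 is not a multiple of 16, which is exactly why the shader needs the bounds check we will see below.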
Initial compute shader
Now, let's take a look at the initial version of the Voronoi shader:
struct ComputeParams {
// Size of the texture to generate
textureSize: vec3u,
padding0: u32,
// Origin point of the texture data in the output texture storage
origin: vec3u,
padding1: u32,
};
@group(0) @binding(0) var<uniform> params : ComputeParams;
@group(0) @binding(1) var outputTex: texture_storage_2d_array<rgba8unorm,write>;
First we declare the inputs we will use in this shader: as described before, we have a uniform struct "params" providing the texture size and destination origin point.
And as second binding we have the output texture 2D array, which is the full texture atlas, and we should write our computation results into a sub-region of that texture.
Next, I started with this version of the main function:
@compute @workgroup_size(16, 16, 1)
fn main(@builtin(global_invocation_id) id: vec3u) {
if id.x >= params.textureSize.x || id.y >= params.textureSize.y {
// Nothing to write in this case.
return;
}
// Write yellow color:
let color = vec4f(1.0, 1.0, 0.0, 1.0);
let layer: u32 = params.origin.z;
textureStore(outputTex, vec2i(params.origin.xy + id.xy), layer, color);
}
- As already mentioned, we use a workgroup size of 16x16x1
- We check the texture bounds [very important here, as we may have other texture data beyond those bounds in the atlas and we should not overwrite it]
- And for now we simply write a uniform yellow color for all pixels.
If we run the TerrainView app with this scene setup, we get this initial result (which is just as expected 🙂):
Then, if I simply change the color to red in the shader and save the file:
let color = vec4f(1.0, 0.0, 0.0, 1.0);
I immediately get the quad display updated to red, which means the compute shader is indeed working properly and the live-reload feature is also functional:
As a final test, we can change the color opacity to 50%:
let color = vec4f(1.0, 0.0, 0.0, 0.5);
And we will get this result:
So this also seems to work fine (even if there is some artefact due to the interaction with the ocean post-processing shader, but that's something I will fix later on, so no worries ;-)).
And this concludes the initial setup of the render surface we needed for our experiment; now we can focus on the Voronoi generation process itself.
A little bit of theory
Yet, before we really get started, we need a little bit of theory on how we will proceed here:
The basic idea when generating Voronoi cells is that for each pixel/location we consider in a 2D area, we need to find the closest "reference point" to that pixel.
Then we can declare that this pixel belongs to the cell of that specific "reference point" (which is in fact what we can also call the "cell center"), and maybe give it a specific color or store its distance to the cell center.
Generally, we are given a set of random "reference points" that can move anywhere, which makes them pretty tricky to track: on each update cycle, and for each pixel, you would need to reconsider all of them to find the closest one.
Yet, we have a way to cheat a bit on this problem, as follows:
First we will create a grid on top of all the pixels that we want to consider:
Then, for each of these grid cells, we will generate a single reference point at a random location inside that specific cell.
After that, whenever we consider a given pixel, the problem of finding the closest reference point is simplified: we only need to consider the reference points from the cell containing this pixel and all the direct cell neighbors, so that's only 9 reference points to check per pixel!
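To make this neighborhood idea concrete, here is a tiny Python sketch (a hypothetical illustration with made-up helper names, not engine code) that enumerates the 9 candidate cells for a given pixel:

```python
def candidate_cells(px, py, grid_size):
    # Cell containing the pixel:
    cx, cy = int(px // grid_size), int(py // grid_size)
    # Per the scheme above, we only need to inspect this cell
    # and its 8 direct neighbors, i.e. 9 cells in total:
    return [(cx + i, cy + j) for j in (-1, 0, 1) for i in (-1, 0, 1)]

cells = candidate_cells(100, 40, 64.0)
print(len(cells))         # -> 9
print((1, 0) in cells)    # the pixel's own cell -> True
```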
Now, let's see how we could apply that in our code:
First, we can compute the normalized UV coords on our render quad as follows:
// Get the normalized UV coords from the pixel coords and texture size:
let uv = vec2f(id.xy) / vec2f(params.textureSize.xy - 1u);
// Write the UVs as color:
let color = vec4f(uv, 0.0, 1.0);
And that will give us this display result:
Which is exactly what we could expect:
- The origin of the texture is at the bottom left corner (0,0)
- Then, when we increment the U coordinate (i.e. the X/horizontal direction), we get more and more red.
- And when we increment the V coordinate (i.e. the Y/vertical direction), we get more and more green.
But in fact, if we want to keep the grid cells square, it's easier to work directly with the texture size in pixels and think in terms of "number of pixels" per grid cell. So we can use this code instead:
// Number of pixels on each dimension to form a grid cell:
let gridSize: f32 = 64.0;
// Get our coordinates on the grid:
let gcoords = vec2f(id.xy) / gridSize;
// From those coords we can extract the integer part which
// are the cell coords, and the fractional part which are our pixel
// local (normalized) coords on the cell:
let cellCoords = vec2i(floor(gcoords) + 0.5);
let localCoords = gcoords - floor(gcoords);
// Write the localCoords as color:
let color = vec4f(localCoords, 0.0, 1.0);
And this will give us this visual result:
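By the way, the floor/fract decomposition used above is easy to mirror on the CPU; here is a quick Python sketch of the same logic (a hypothetical helper, just for checking the math):

```python
import math

def split_grid_coords(px, py, grid_size):
    # Mirror of the WGSL code: gcoords = pixel / gridSize, then the
    # integer part gives the cell coords and the fractional part
    # gives the normalized local coords inside that cell.
    gx, gy = px / grid_size, py / grid_size
    cell = (math.floor(gx), math.floor(gy))
    local = (gx - math.floor(gx), gy - math.floor(gy))
    return cell, local

cell, local = split_grid_coords(100, 40, 64.0)
print(cell)   # -> (1, 0)
print(local)  # -> (0.5625, 0.625)
```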
From there, I started to think it would be cool to be able to dynamically manipulate the gridSize parameter used by this shader, so why not add a dedicated control panel for it?
Adding support for StructuredBuffer creation in config
To introduce this dynamic control support, I first added support to configure and share WebGPU buffers from my scene config file:
buffer0:
type: StructuredBufferBlueprint
usage: Uniform | CopyDst
elements:
- name: TextureSize
type: Vec3u
- name: GridSize
type: F32
value: 64.0
- name: Origin
type: Vec3u
Then I made sure I could retrieve, update and use that buffer instead of creating a new one in the ComputeSurface construction process:
// Instead of creating a buffer here we can retrieve the structured buffer
// from the engine:
auto& stbuf =
WGPUEngine::instance()->structured_buffers().get(_uniformBufferId);
stbuf.set_element(SID("TextureSize"),
Vec3u(_textureWidth, _textureHeight, 1));
stbuf.set_element(SID("Origin"),
Vec3u(area.rect.xmin, area.rect.ymin, area.layer));
stbuf.update();
// struct Params {
// Vec3u textureSize;
// U32 pad0{0};
// Vec3u origin;
// U32 pad1{0};
// };
// UniformBuffer<Params> params{
// Params{.textureSize = Vec3u(_textureWidth, _textureHeight, 1),
// .origin = Vec3u(area.rect.xmin, area.rect.ymin, area.layer)}};
auto& tex = bsp.get_texture();
auto bgrp0 = cnode->setup_bind_grp({
// params.as_ubo(),
stbuf.buffer().as_ubo(),
stoTex2dArray(tex),
});
And finally, I just had to update the ComputeParams struct in the shader file to also provide the gridSize from there:
struct ComputeParams {
// Size of the texture to generate
textureSize: vec3u,
// Number of pixels on each dimension to form a grid cell:
gridSize: f32,
// Origin point of the texture data in the output texture storage
origin: vec3u,
padding1: u32,
};
And in the main function we can now just use that parameter instead of the hardcoded value:
// Get our coordinates on the grid:
let gcoords = vec2f(id.xy) / params.gridSize;
With this first part of the update, if I change the default value of the "GridSize" element to 256.0, for instance, in the config file, the change is reflected in the shader as expected on the next start:
Building control panel for buffer elements
After adding support for structured buffer definitions in the config, I then further extended the configuration to also support the description of a ControlPanel, as follows:
voronoi_panel:
type: ControlPanelBlueprint
icon: icons/settings-sliders.png
title: Voronoi Settings
elements:
- type: SliderRow
caption: Grid Size:
text_precision: 2
use_log_scale: false
min_value: 1.0
max_value: 512.0
target:
type: StructuredBuffer
name: buffer0
element: GridSize
In this element, we specify the ControlPanelBlueprint type, and then we build the panel with a natural flow:
- We indicate the icon that should be used for the visibility-toggle button of this panel,
- Then we provide the title to show on the panel,
- And then a list of elements to display on that panel.
- For now we only have one element, of the SliderRow type, allowing us to modify the value of the "GridSize" element of the buffer0 we created before.
With this new component, when I run the app I can now dynamically control the GridSize value, and the changes are immediately reflected in the shader 🥳:
Adding support for grid display
Okay, now, before moving on to the generation of the reference points, I think we need to improve a bit on the default display we have:
Displaying the local cell UV coords is a bit confusing, and instead I would like to display a simple white background for each cell with black borders around them.
So let's implement this!
First I add a new entry in the buffer0 config to control the border size in pixels:
buffer0:
type: StructuredBufferBlueprint
usage: Uniform | CopyDst
elements:
- name: TextureSize
type: Vec3u
- name: GridSize
type: F32
value: 64.0
- name: Origin
type: Vec3u
- name: BorderSize
type: F32
value: 0.0
Next, I add another row to control this value from the "Voronoi Settings" panel:
voronoi_panel:
type: ControlPanelBlueprint
icon: icons/settings-sliders.png
title: Voronoi Settings
elements:
- type: SliderRow
caption: "Grid Size:"
text_precision: 2
use_log_scale: false
min_value: 2.0
max_value: 512.0
target:
type: StructuredBuffer
name: buffer0
element: GridSize
- type: SliderRow
caption: "Border Size:"
text_precision: 0
use_log_scale: false
min_value: 0.0
max_value: 8.0
target:
type: StructuredBuffer
name: buffer0
element: BorderSize
Then I update the ComputeParams struct in our shader to reflect the changes we just made to the buffer:
struct ComputeParams {
// Size of the texture to generate
textureSize: vec3u,
// Number of pixels on each dimension to form a grid cell:
gridSize: f32,
// Origin point of the texture data in the output texture storage
origin: vec3u,
// Border size for the grid display in number of pixels
borderSize: f32,
};
And finally we use this new parameter to draw black borders on a white background when the provided value is not 0.0:
// Write the localCoords as color:
var color = vec4f(localCoords, 0.0, 1.0);
// Add the border if specified:
// Compute the "normalized border size" (as borderSize is in pixels):
let nbsize = params.borderSize / params.gridSize;
if nbsize > 0.0 {
let isBorder = localCoords.x < nbsize || localCoords.x > (1.0 - nbsize) || localCoords.y < nbsize || localCoords.y > (1.0 - nbsize);
let white = vec4f(1.0, 1.0, 1.0, 1.0);
let black = vec4f(0.0, 0.0, 0.0, 1.0);
color = select(white, black, isBorder);
}
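The border test above is easy to mirror on the CPU as well; here is a quick Python sketch of the same predicate (hypothetical helper name, just for checking the logic):

```python
def is_border(lx, ly, border_px, grid_px):
    # Normalized border size, as in the shader (borderSize is in pixels):
    nb = border_px / grid_px
    # A pixel is on the border if its local coords fall within
    # nb of any edge of the cell:
    return lx < nb or lx > 1.0 - nb or ly < nb or ly > 1.0 - nb

# With a 2-pixel border on 64-pixel cells, nb = 0.03125:
print(is_border(0.02, 0.5, 2.0, 64.0))  # near the left edge -> True
print(is_border(0.5, 0.5, 2.0, 64.0))   # cell center -> False
```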
With those changes here is the result we get:
Adding the reference points
And now, as promised, it's finally time to generate the reference points for each cell.
We will need one pseudo-random but fixed location in each cell, which can easily be achieved using a hash function applied, for instance, to the coordinates of the bottom-left corner of each cell.
Personally, I was already using this hash function in other parts of my code to go from a 3D integer input to 1D noise:
fn hash(p: vec3i) -> f32 {
// Better coefficients for 3D -> 1D
// cf. https://www.reedbeta.com/blog/hash-functions-for-gpu-rendering/
var n = p.x * 1597 + p.y * 3571 + p.z * 7919;
var state: u32 = u32(n) * 747796405u + 2891336453u;
state = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
state = (state >> 22u) ^ state;
return -1.0 + 2.0 * f32(state) / f32(0xffffffffu);
}
And extending that a bit, here is the version Claude came up with for our 2D needs:
fn hash(p: vec2i) -> vec2f {
// Generate two different hash values from the 2D input
// Using different prime coefficients for each component
var n1 = p.x * 1597 + p.y * 3571;
var n2 = p.x * 2179 + p.y * 5231;
// First hash component
var state1: u32 = u32(n1) * 747796405u + 2891336453u;
state1 = ((state1 >> ((state1 >> 28u) + 4u)) ^ state1) * 277803737u;
state1 = (state1 >> 22u) ^ state1;
// Second hash component with different constants
var state2: u32 = u32(n2) * 1103515245u + 12345u;
state2 = ((state2 >> ((state2 >> 28u) + 4u)) ^ state2) * 277803737u;
state2 = (state2 >> 22u) ^ state2;
// Convert to [-1, 1] range for both components
var x = -1.0 + 2.0 * f32(state1) / f32(0xffffffffu);
var y = -1.0 + 2.0 * f32(state2) / f32(0xffffffffu);
return vec2f(x, y);
}
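Since this hash is pure integer arithmetic, we can mirror it in Python to check its two key properties: it is deterministic (the same cell always yields the same point) and both components land in [-1, 1]. This is just a CPU-side sketch for verification, not engine code (note the masking used to emulate WGSL's u32 wrap-around):

```python
MASK = 0xFFFFFFFF  # emulate WGSL u32 wrap-around arithmetic

def _scramble(n, mul, inc):
    # PCG-like scrambling step, mirroring the WGSL code:
    state = ((n & MASK) * mul + inc) & MASK
    state = (((state >> ((state >> 28) + 4)) ^ state) * 277803737) & MASK
    state = (state >> 22) ^ state
    # Map to [-1, 1]:
    return -1.0 + 2.0 * state / 0xFFFFFFFF

def hash2(px, py):
    # Two decorrelated hashes from the same 2D integer input,
    # using the same prime coefficients as the shader:
    n1 = px * 1597 + py * 3571
    n2 = px * 2179 + py * 5231
    return (_scramble(n1, 747796405, 2891336453),
            _scramble(n2, 1103515245, 12345))

x, y = hash2(3, -7)
assert hash2(3, -7) == (x, y)              # deterministic
assert -1.0 <= x <= 1.0 and -1.0 <= y <= 1.0  # in range
```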
So we will use that to compute the reference point in the current grid cell:
let cellCoords = vec2i(floor(gcoords) + 0.5);
let localCoords = gcoords - floor(gcoords);
// Hash the cellCoords to get the refPoint coords
// Normalize the output to [0,1] instead of [-1,1]
let refPointCoords = hash(cellCoords) * 0.5 + 0.5;
And just to give it a quick check we will add some debug code to display those reference points in our grid:
// Square Radius in normalized coords space:
let pointRadiusSqr = params.refPointSize * params.refPointSize;
// Check the distance to the current point
// and display the refPoint in each cell in red:
let dir = localCoords - refPointCoords;
let d2 = dot(dir, dir);
let red = vec4f(1.0, 0.0, 0.0, 1.0);
color = select(color, red, d2 < pointRadiusSqr);
Which will give us this result:
- We can notice here that some points are clipped at the borders of their cells,
- But that's expected for this simple debug display, since the reference point changes when crossing a border,
- Ah, yes, and I also added a parameter to control the radius of the ref points for this debug display 🙂.
Finding the closest ref point
Now we need to find the closest reference point for any given pixel using the 9 closest cells as explained above.
As in the base ShaderToy example, we will refactor our code into a dedicated function for this:
struct RefPointInfos {
distance: f32,
gridPos: vec2f,
};
fn find_closest_ref_point(gridCoords: vec2f) -> RefPointInfos {
// Start with a very large distance:
var res = RefPointInfos(10.0, vec2f(0.0));
// From those coords we can extract the integer part which
// are the cell coords, and the fractional part which are our pixel
// local (normalized) coords on the cell:
let cellCoords = vec2i(floor(gridCoords) + 0.5);
let localCoords = gridCoords - floor(gridCoords);
for (var j = -1; j <= 1; j++) {
for (var i = -1; i <= 1; i++) {
// Get the coord offset to use from our current cell:
// This "offset" is what we need to add to our current cell
// coords to get the sibling cells coords.
// Note: this was called "g" in the original implementation.
// Note2: we use i32 values directly ourselves.
let cellOffset = vec2i(i, j);
// hash the sibling cell coords to get its reference point
// local coords (renormalizing to [0,1] instead of [-1,1])
// Note: this was called o in the original implementation
let refPointLocalCoords = hash(cellCoords + cellOffset) * 0.5 + 0.5;
// Now we need to get the vector from our current pixel location
// to that reference point. We'll use our current cell origin
// as "frame origin". So in there the sibling ref point coords
// are:
let refCoords = vec2f(cellOffset) + refPointLocalCoords;
// So now the "r" vector is:
let r = refCoords - localCoords;
// Compute the squared distance:
// Note: just called "d" in the original implementation:
let d2 = dot(r, r);
// And compare to the current best distance:
// Note: "distance" field actually contains squared distance
// for now.
if d2 < res.distance {
// Replace the best solution:
res.distance = d2;
// Get the global grid coords of the ref point:
res.gridPos = vec2f(cellCoords) + refCoords;
}
}
}
// Turn squared distance to distance:
res.distance = sqrt(res.distance);
return res;
}
This function is a bit long because I added a lot of comments, but the underlying logic is pretty simple in the end:
- We prepare an output "res" object with a very large distance to ensure we will find a closer solution,
- Then we split the grid coordinates into the cell coordinates part and the local coordinates,
- Then we iterate on each sibling cell,
- Getting the coordinates of the reference point for each of them,
- And computing the squared distance between that reference point and our current grid coordinates.
- Updating the best solution if needed,
- Finally, we return the distance and location of the closest reference point found.
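The steps above can also be sketched on the CPU; here is a Python mirror of the search (including a compact port of the cell hash, so the snippet is self-contained - again just an illustration for experimentation, not engine code):

```python
import math

MASK = 0xFFFFFFFF  # emulate WGSL u32 wrap-around

def _scramble(n, mul, inc):
    s = ((n & MASK) * mul + inc) & MASK
    s = (((s >> ((s >> 28) + 4)) ^ s) * 277803737) & MASK
    s = (s >> 22) ^ s
    return -1.0 + 2.0 * s / MASK

def ref_point(cx, cy):
    # Fixed pseudo-random point in [0,1] for cell (cx, cy),
    # mirroring hash(cellCoords) * 0.5 + 0.5 in the shader:
    x = _scramble(cx * 1597 + cy * 3571, 747796405, 2891336453)
    y = _scramble(cx * 2179 + cy * 5231, 1103515245, 12345)
    return (x * 0.5 + 0.5, y * 0.5 + 0.5)

def find_closest_ref_point(gx, gy):
    # Same structure as the WGSL function: scan the 3x3 cell
    # neighborhood and keep the nearest reference point.
    cx, cy = math.floor(gx), math.floor(gy)
    lx, ly = gx - cx, gy - cy
    best_d2, best_pos = 10.0, (0.0, 0.0)
    for j in (-1, 0, 1):
        for i in (-1, 0, 1):
            ox, oy = ref_point(cx + i, cy + j)
            # Vector from the pixel to the sibling's ref point,
            # expressed in the current cell's frame:
            rx, ry = (i + ox) - lx, (j + oy) - ly
            d2 = rx * rx + ry * ry
            if d2 < best_d2:
                best_d2 = d2
                best_pos = (cx + i + ox, cy + j + oy)
    return math.sqrt(best_d2), best_pos

d, pos = find_closest_ref_point(3.25, 7.8)
# The winner can never be farther than the own-cell diagonal:
assert 0.0 <= d <= math.sqrt(2.0)
```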
And then we can use this function to get a specific cell color for each pixel, and thus display our Voronoi diagram:
// Find the closest ref point for our grid coords:
let rp = find_closest_ref_point(gcoords);
// Generate a color based on the selected refPoint grid position:
// But avoid floating point changes here on a single cell:
let coords = floor(rp.gridPos * 100.0);
var rgb = hash3(coords);
let d = rp.distance;
// Darken a bit the color the further we get from the
// reference point (ie. cell center):
rgb *= clamp(1.0 - 0.4 * d * d, 0.0, 1.0);
// Display a black dot instead if we are very close
// to the reference point:
rgb *= smoothstep(0.08, 0.09, d);
let color = vec4f(rgb, 1.0);
- We get the closest reference point,
- Then we derive a fixed rgb color per cell from the (fixed) grid position of that reference point.
- Note that in the code above I'm also using an additional helper function to generate a random color from a vec2f input:
fn hash3(p: vec2f) -> vec3f {
// Simple hash using sin/cos with large multipliers
// The large numbers help break up patterns
var r = sin(p.x * 12.9898 + p.y * 78.233) * 43758.5453;
var g = sin(p.x * 93.9898 + p.y * 67.345) * 27183.1592;
var b = sin(p.x * 269.5 + p.y * 183.3) * 51829.6737;
// Take fractional part and ensure [0, 1] range
return vec3f(abs(fract(r)), abs(fract(g)), abs(fract(b)));
}
This just uses sines, so it is not as good as the PCG-style hash function we use for the reference points themselves, but since it is only used for coloring the cells it will be good enough here. Also, note that we could stop right after the computation of this rgb value and already get a proper Voronoi diagram display.
But, as in the Inigo Quilez example, we can then also apply a simple darkening radial gradient to get a "non-flat" look on each cell.
And we can display the cell centers as black dots using this statement:
rgb *= smoothstep(0.08, 0.09, d);
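For reference, smoothstep(e0, e1, x) is the standard Hermite ramp: it returns 0 below e0, 1 above e1, and a smooth cubic in between, so the statement above carves a soft-edged black dot of radius roughly 0.08 grid units around each cell center. A quick Python equivalent of the WGSL built-in:

```python
def smoothstep(e0, e1, x):
    # Standard Hermite interpolation, as defined for the WGSL/GLSL built-in:
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

# Inside the dot (d < 0.08) the factor is 0 -> black pixel;
# past 0.09 it is 1 -> cell color unchanged:
print(smoothstep(0.08, 0.09, 0.05))  # -> 0.0
print(smoothstep(0.08, 0.09, 0.20))  # -> 1.0
```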
Adding time animation
Okay, so this is looking nice already but there is still a critical part missing which is the time animation of the diagram! So let's now add this.
First, I add access to my camera-specific UniformBuffer when building the compute node in ComputeSurfaceBlueprint, as this will give me the time value I need:
auto bgrp0 = cnode->setup_bind_grp({
// params.as_ubo(),
stbuf.buffer().as_ubo(),
scene.get_camera().as_ubo(),
stoTex2dArray(tex),
});
Then I can reflect this change in the corresponding shader section:
@group(0) @binding(0) var<uniform> params : ComputeParams;
@group(0) @binding(1) var<uniform> cam : StdCameraUBO;
@group(0) @binding(2) var outputTex: texture_storage_2d_array<rgba8unorm,write>;
And for reference, here is the content I have for the StdCameraUBO structure:
struct StdCameraUBO {
projMatrix: mat4x4f,
viewMatrix: mat4x4f,
revZProjMatrix: mat4x4f,
invViewMatrix: mat4x4f,
invProjMatrix: mat4x4f,
invRevZProjMatrix: mat4x4f,
camWorldPos: vec4f,
winSize: vec2i,
near_plane: f32,
far_plane: f32,
c_log_factor: f32,
f_log_coef: f32,
fovy: f32,
fovx: f32,
time: f32,
};
=> Here you can see that the time value is provided as an f32 in the last member of the struct.
And finally, we use this time parameter to compute a dynamic movedRefCoords vec2f value:
let refCoords = vec2f(cellOffset) + refPointLocalCoords;
// Now with time animation as in the original Inigo Quilez version:
let movedRefCoords = vec2f(cellOffset) + (0.5 + 0.5 * sin(cam.time + 6.2831 * refPointLocalCoords));
// So now the "r" vector is:
let r = movedRefCoords - localCoords;
- Note that here we replace the refPointLocalCoords with (0.5 + 0.5 * sin(cam.time + 6.2831 * refPointLocalCoords)),
- This value depends on the current time and the static refPointLocalCoords,
- And the output will still be in the range [0,1], so our dynamic reference points stay inside their grid cells as required.
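We can quickly convince ourselves of that last point with a small Python check of the remapping used in the shader:

```python
import math

def animated_coord(t, ref):
    # Same remapping as in the shader: sin() is in [-1, 1],
    # so 0.5 + 0.5 * sin(...) is always in [0, 1].
    return 0.5 + 0.5 * math.sin(t + 6.2831 * ref)

# Sample many times and reference offsets: the moving point
# never leaves its grid cell:
vals = [animated_coord(t * 0.1, r * 0.01)
        for t in range(200) for r in range(100)]
print(min(vals) >= 0.0, max(vals) <= 1.0)  # -> True True
```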
And here we are! With this simple time parameter addition, we now get our voronoi diagram to animate and change continuously as shown in the first overview video!
Limitations and future work
Now, this animation mechanism is not perfect: if you pay close attention to the movements, you will notice that the cell centers always move along the same paths.
In many cases I don't think this would really be a problem, but there are still a few ways we could improve on this point:
- First, we could use a more complex animation function instead of simply sin(time + offset),
- Or, more interestingly, one could imagine generating 2 or more reference points per cell to get more random elements to experiment with,
- Or, even better, one could consider 2 grids instead of just one, maybe with a different grid size for the second one, and a random continuous translation of that second grid with respect to the first one?
But anyway, this will be good enough for a first version, so let's stop here for now.
As usual, don't forget to check out the shader files on the NervLand Adventures repository if you want to dig deeper into this, and if you have any questions, don't hesitate to leave a comment below!
See you next time 🙂! Bye!