Most WebGPU tutorials go deep into theory or jump straight into 3D engines. If you are a frontend developer like me, that feels like learning to drive by reading the car manual.
This article is the opposite. We will build something visual, step by step, with zero GPU knowledge. By the end, you will have a real-time animated gradient running on your GPU — and you will understand every line.
What is WebGPU (in 30 seconds)
WebGPU lets your JavaScript talk directly to the GPU. Think about it like this:
Imagine you need to paint a wall. Your CPU is one very talented painter — fast, smart, and precise. But it paints one brushstroke at a time.
Your GPU is a team of 5,000 painters. Each one can only do simple strokes, but they all paint at the same time. Give them a wall with 2 million pixels? They split the work and finish in one pass.
WebGPU is the phone call that lets your browser hire that team of painters.
Browser support (March 2026)
Before we start — can your browser do this?
| Browser | Status |
|---|---|
| Chrome/Edge | ✅ Supported (since v113) |
| Firefox | ✅ Supported (since v141) |
| Safari | ✅ Supported (since 26) |
Good news: all modern browsers support it now. For production, always check first:
if (!navigator.gpu) {
alert("WebGPU is not supported in your browser");
}
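If you want to fail more gracefully than an alert (say, fall back to a static CSS gradient), you can wrap the check in a tiny helper. This is a sketch of my own, not part of any spec; the name `checkWebGPU` and the fallback behavior are illustrative:

```javascript
// Returns "webgpu" when the API object is present, otherwise "fallback".
// Takes the navigator-like object as a parameter so it is easy to test.
function checkWebGPU(nav) {
  if (nav && nav.gpu) {
    return "webgpu";
  }
  return "fallback";
}

// In the browser you might use it like this:
// if (checkWebGPU(navigator) === "fallback") {
//   document.body.style.background = "linear-gradient(45deg, #301, #013)";
// }
```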
What we are building
A full-screen animated gradient where the GPU decides the color of every pixel, 60 times per second. It looks like a lava lamp made of math.
No libraries. No frameworks. Just HTML, JavaScript, and a few lines of GPU code.
Step 1: The HTML (simple but necessary)
<!DOCTYPE html>
<html>
<head>
<title>My First WebGPU Project</title>
<style>
body { margin: 0; overflow: hidden; }
canvas { display: block; width: 100vw; height: 100vh; }
</style>
</head>
<body>
<canvas id="canvas"></canvas>
<script src="main.js"></script>
</body>
</html>
Nothing new here — just a full-screen canvas. This is our "empty wall" that the GPU painters will fill.
Step 2: Connect to the GPU
Before the painters can work, you need to call the agency, confirm they are available, and give them the address. That is what this step does.
// main.js
async function init() {
// 1. Check support
if (!navigator.gpu) {
throw new Error("WebGPU not supported");
}
// 2. Get an adapter (your physical GPU)
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) {
throw new Error("No GPU adapter found");
}
// 3. Get a device (your logical connection to it)
const device = await adapter.requestDevice();
// 4. Connect the canvas
const canvas = document.getElementById("canvas");
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
const context = canvas.getContext("webgpu");
const format = navigator.gpu.getPreferredCanvasFormat();
context.configure({ device, format });
return { device, context, format, canvas };
}
What just happened — using our painting analogy:
- `adapter` = finding the painting agency (your physical GPU)
- `device` = signing the contract with them (your app's private connection)
- `context` = giving them the key to the room where the wall is (connecting canvas to GPU)
This setup code is the same for every WebGPU project. Write it once, copy it forever.
Step 3: Write a shader (the fun part)
Now we need to give instructions to our painters. In GPU world, these instructions are called shaders — small programs written in WGSL (WebGPU Shading Language). The syntax looks like Rust, but simpler.
Think of a shader like a recipe card you give to each painter. Every painter gets the same recipe, but they each know their own position on the wall. So painter #4500 reads: "I am at position (200, 300). Based on this recipe, my color should be blue." All 5,000 painters follow the recipe at the same time, and the wall is painted in one pass.
For our gradient, we need a fragment shader — a recipe that takes a pixel position and returns a color.
const shaderCode = /* wgsl */`
// A uniform is data we send from JavaScript to the GPU
@group(0) @binding(0) var<uniform> time: f32;
@group(0) @binding(1) var<uniform> resolution: vec2f;
// Vertex shader: positions a full-screen triangle
@vertex
fn vertexMain(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
// A clever trick: 3 vertices that cover the entire screen
let pos = array<vec2f, 3>(
vec2f(-1.0, -1.0),
vec2f( 3.0, -1.0),
vec2f(-1.0, 3.0)
);
return vec4f(pos[i], 0.0, 1.0);
}
// Fragment shader: runs once PER PIXEL
@fragment
fn fragmentMain(@builtin(position) pos: vec4f) -> @location(0) vec4f {
// Normalize coordinates to 0.0 - 1.0
let uv = pos.xy / resolution;
// Animated color channels using sine waves
let r = sin(uv.x * 3.0 + time) * 0.5 + 0.5;
let g = sin(uv.y * 3.0 + time * 0.7) * 0.5 + 0.5;
let b = sin((uv.x + uv.y) * 2.0 + time * 1.3) * 0.5 + 0.5;
return vec4f(r, g, b, 1.0);
}
`;
Let me explain each part:
- `@vertex fn vertexMain` — This draws a giant triangle that covers the entire screen. Think of it as stretching a canvas over the wall. It is a standard trick you will reuse in every full-screen effect.
- `@fragment fn fragmentMain` — This is the recipe card. It runs once for every pixel. Each pixel receives its position and returns a color.
- `sin()` with `time` — The `sin` function creates smooth waves. Adding `time` makes them move. Different speeds per color channel (red, green, blue) create that organic, shifting look.
- `uv` — The pixel position, but normalized to a range of 0 to 1. Like a percentage: top-left is (0, 0), bottom-right is (1, 1). This way, the shader works on any screen size.
If you have used CSS gradients before, the idea is similar — but this one updates 60 times per second and you control everything with math.
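To demystify the math, here is the same per-pixel recipe written in plain JavaScript. This is for intuition only (the function name `pixelColor` is my own); on the GPU this runs for every pixel in parallel:

```javascript
// Mimics fragmentMain on the CPU: takes normalized uv coordinates (0..1)
// and a time in seconds, returns an [r, g, b] triple, each in 0..1.
function pixelColor(uvX, uvY, time) {
  // sin() ranges from -1 to 1; "* 0.5 + 0.5" remaps that to 0..1
  const r = Math.sin(uvX * 3.0 + time) * 0.5 + 0.5;
  const g = Math.sin(uvY * 3.0 + time * 0.7) * 0.5 + 0.5;
  const b = Math.sin((uvX + uvY) * 2.0 + time * 1.3) * 0.5 + 0.5;
  return [r, g, b];
}

// At time 0, the top-left pixel (uv = 0, 0) is mid-gray:
// sin(0) * 0.5 + 0.5 = 0.5 for all three channels.
```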
Step 4: Set up the pipeline
We have the painters (GPU), the wall (canvas), and the recipe (shader). Now we need to put everything together — like a manager organizing the work before it starts. That is what the pipeline does.
It tells the GPU: "Here is the recipe, and here is the data you will need."
function createPipeline(device, format, shaderCode) {
const shaderModule = device.createShaderModule({ code: shaderCode });
// Create buffers for our uniform data (time + resolution)
const timeBuffer = device.createBuffer({
size: 4, // f32 = 4 bytes
usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
const resolutionBuffer = device.createBuffer({
size: 8, // vec2f = 2 × 4 bytes
usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
// Bind group layout: tells the GPU what data to expect
const bindGroupLayout = device.createBindGroupLayout({
entries: [
{ binding: 0, visibility: GPUShaderStage.FRAGMENT, buffer: { type: "uniform" } },
{ binding: 1, visibility: GPUShaderStage.FRAGMENT, buffer: { type: "uniform" } },
],
});
// Bind group: connects our buffers to the shader's @binding slots
const bindGroup = device.createBindGroup({
layout: bindGroupLayout,
entries: [
{ binding: 0, resource: { buffer: timeBuffer } },
{ binding: 1, resource: { buffer: resolutionBuffer } },
],
});
// The render pipeline
const pipeline = device.createRenderPipeline({
layout: device.createPipelineLayout({
bindGroupLayouts: [bindGroupLayout],
}),
vertex: {
module: shaderModule,
entryPoint: "vertexMain",
},
fragment: {
module: shaderModule,
entryPoint: "fragmentMain",
targets: [{ format }],
},
});
return { pipeline, bindGroup, timeBuffer, resolutionBuffer };
}
Yes, this is a lot of code. That is the most common complaint about WebGPU. But if you look closely, it is just connecting things — like plugging cables into the right ports:
- Compile the recipe → `shaderModule`
- Create two small envelopes (`buffers`) to pass data to the painters: current time and screen size
- Label the envelopes so the painters know which is which (`bindGroup`)
- Put it all together in one package (`pipeline`)
The good news? This wiring is almost the same for every project. The creative part is always the shader.
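The only numbers you have to get right by hand are the buffer sizes. Here is a small convenience table of my own (not part of the WebGPU API) for the common WGSL types, with one classic gotcha noted: `vec3f` has 16-byte alignment inside uniform structs.

```javascript
// Byte sizes of common WGSL types, for createBuffer({ size: ... }).
// Careful: inside a uniform struct, vec3f is *aligned* to 16 bytes,
// so when packing several fields into one buffer, pad it accordingly.
const WGSL_SIZE = {
  f32: 4, i32: 4, u32: 4,
  vec2f: 8,
  vec3f: 12, // size 12, but 16-byte alignment in structs
  vec4f: 16,
};

function bufferSize(type) {
  const size = WGSL_SIZE[type];
  if (size === undefined) throw new Error(`Unknown WGSL type: ${type}`);
  return size;
}

// Matches the buffers above: time (f32) = 4, resolution (vec2f) = 8.
```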
Step 5: The render loop
Time to paint. But we do not paint once — we paint 60 times per second. Each frame, we update the time (so the colors move), tell the GPU to run the recipe, and repeat. It is like a flipbook: each page is slightly different, and together they create animation.
function startRenderLoop(device, context, pipeline, bindGroup, timeBuffer, resolutionBuffer, canvas) {
// Send resolution to GPU (only needs to happen once, unless resized)
device.queue.writeBuffer(
resolutionBuffer, 0,
new Float32Array([canvas.width, canvas.height])
);
function frame(timestamp) {
// Update time (convert ms to seconds)
device.queue.writeBuffer(
timeBuffer, 0,
new Float32Array([timestamp / 1000])
);
// Create the render command
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass({
colorAttachments: [{
view: context.getCurrentTexture().createView(),
loadOp: "clear",
storeOp: "store",
}],
});
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.draw(3); // 3 vertices = our full-screen triangle
pass.end();
// Submit to GPU
device.queue.submit([encoder.finish()]);
requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
}
If you have used requestAnimationFrame before for Canvas 2D animations, this is the same idea. The only difference: instead of drawing with ctx.fillRect(), you write a command list and send it to the GPU. Think of commandEncoder as writing a to-do list, and device.queue.submit() as handing that list to the painters.
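One practical refinement: if you drive the shader straight from the rAF timestamp, you cannot pause the animation. A small clock helper that accumulates elapsed seconds only while playing (a hypothetical addition of mine, not part of the WebGPU API):

```javascript
// Accumulates animation time in seconds, so pausing actually freezes
// the gradient instead of it jumping ahead when you resume.
class AnimationClock {
  constructor() {
    this.elapsed = 0;  // total seconds while playing
    this.last = null;  // last rAF timestamp seen (ms)
    this.playing = true;
  }
  // Call once per frame with the requestAnimationFrame timestamp (ms).
  tick(timestampMs) {
    if (this.last !== null && this.playing) {
      this.elapsed += (timestampMs - this.last) / 1000;
    }
    this.last = timestampMs;
    return this.elapsed;
  }
  pause() { this.playing = false; }
  resume() { this.playing = true; }
}

// Inside frame(): device.queue.writeBuffer(timeBuffer, 0,
//   new Float32Array([clock.tick(timestamp)]));
```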
Step 6: Wire it all together
async function main() {
const { device, context, format, canvas } = await init();
const { pipeline, bindGroup, timeBuffer, resolutionBuffer } =
createPipeline(device, format, shaderCode);
startRenderLoop(device, context, pipeline, bindGroup, timeBuffer, resolutionBuffer, canvas);
}
main();
Open index.html in your browser. You should see a smooth, colorful gradient filling your screen — every single pixel computed by the GPU in real time. All 5,000 painters working together.
Make it your own
The shader is your playground. You can change the recipe and the painters will follow. Try replacing the color math inside fragmentMain:
Circular waves:
let dist = distance(uv, vec2f(0.5, 0.5));
let wave = sin(dist * 20.0 - time * 3.0) * 0.5 + 0.5;
return vec4f(wave, wave * 0.5, 1.0 - wave, 1.0);
Checkerboard morph:
let scale = 10.0;
let checker = step(0.5, fract(uv.x * scale + sin(time))) *
step(0.5, fract(uv.y * scale + cos(time)));
return vec4f(checker, 1.0 - checker, 0.5, 1.0);
Noise-like pattern:
let n = fract(sin(dot(uv + time * 0.1, vec2f(12.9898, 78.233))) * 43758.5453);
return vec4f(n * 0.8, n * 0.4, n, 1.0);
Change one number, save, refresh. You will see the result immediately. This is the best way to learn — experiment and break things.
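The "noise-like pattern" uses a classic one-liner hash found in many shaders. You can sanity-check it in plain JavaScript (`fract` is WGSL's fractional-part function, reimplemented here):

```javascript
// fract(x) in WGSL is x - floor(x): the fractional part, always in [0, 1).
const fract = (x) => x - Math.floor(x);

// The classic shader hash: dot product with odd constants, a big sine,
// a huge multiply, then keep only the fractional part. It is fully
// deterministic, but nearby inputs produce wildly different outputs,
// which is why it reads as static on screen.
function hash(x, y) {
  const dot = x * 12.9898 + y * 78.233;
  return fract(Math.sin(dot) * 43758.5453);
}
```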
What you just learned
Here is a cheat sheet. Every WebGPU concept mapped to something you already know as a frontend developer:
| WebGPU concept | Analogy |
|---|---|
| `adapter` | Finding which painting agency is available |
| `device` | Signing a contract with them |
| `shaderModule` | The recipe card you give to every painter |
| `buffer` | An envelope with data (time, screen size) for the painters |
| `bindGroup` | Labels on the envelopes so painters know which is which — like props in a component |
| `pipeline` | The full work plan: recipe + materials + instructions |
| `commandEncoder` | A to-do list you write before handing it to the painters |
| `device.queue.submit()` | Handing the to-do list and saying "go" |
| `requestAnimationFrame` | Same as always — the flipbook loop |
When should you use WebGPU?
After building this, you might want to use the GPU for everything. Do not do that.
The GPU is like a factory: amazing for mass production, terrible for custom one-off work. Use the right tool for the job.
Use WebGPU when:
- Processing thousands of pixels, particles, or data points at once
- Running ML models in the browser
- Building visualizations with 100k+ data points
- Real-time image or video filters
Stick with JavaScript when:
- DOM manipulation
- Business logic and API calls
- Anything that is not the same operation repeated thousands of times
Where to go from here
You now have the foundation. The setup code stays the same — from here, the fun is changing the recipe (shader). Some ideas:
- Add mouse interaction — send mouse coordinates as another uniform, so the gradient reacts to your cursor
- Compute shaders — use the GPU to process data, not just pixels (great for simulations)
- Particle systems — combine vertex and fragment shaders to move thousands of objects
- WebGPU Samples — official examples with more advanced patterns
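For the mouse-interaction idea, the only new math is normalizing client coordinates into the same 0..1 uv space the shader already uses. A sketch (`mouseToUv` and `mouseBuffer` are my own names, not from the WebGPU API):

```javascript
// Converts mouse client coordinates to the 0..1 uv space of the shader.
// Both clientY and the fragment position's y grow downward, so no
// vertical flip is needed for this particular shader.
function mouseToUv(clientX, clientY, width, height) {
  return {
    x: clientX / width,
    y: clientY / height,
  };
}

// In the browser, with a hypothetical mouseBuffer created like
// timeBuffer but with size 8 (a vec2f):
// canvas.addEventListener("mousemove", (e) => {
//   const uv = mouseToUv(e.clientX, e.clientY, canvas.width, canvas.height);
//   device.queue.writeBuffer(mouseBuffer, 0, new Float32Array([uv.x, uv.y]));
// });
```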
Questions? Leave a comment — I will answer every one. If you built something cool by changing the shader, I would love to see it.