Sylwia Laskowska
Why Are We Still Doing GPU Work in JavaScript? (Live WebGPU Benchmark & Demo 🚀)

JavaScript has been the main language of the web for years. Its popularity probably surprised even its creator, Brendan Eich, who famously built the first version of the language in just about a week.

One of the reasons JavaScript became so dominant is the sheer power of the browser. From a single application — the browser — we can access hundreds of millions of websites and applications. No need to download or install anything. That's a huge part of its success.

And browser vendors have been working incredibly hard to make JavaScript faster and faster. Modern engines are extremely optimized. But there is still one fundamental limitation:

JavaScript runs on the CPU.

So what about tasks that are perfect for the GPU?

This is where WebGPU enters the scene. 🚀
Let’s take a look at what it can actually do — and when it really makes sense to use it.

By the way — my jsDay talk is getting closer and closer! I'll be speaking there about WebGPU + WebAssembly, which is exactly the kind of thing you see in this article: GPUs in the browser, compute shaders, and pushing the web a bit further than usual.

To celebrate that (and maybe calm my pre-conference nerves a little 😅), I recorded a short promo reel for the talk, which you can find here.

If you feel like watching it and dropping a like, I’d really appreciate it.
And if you’re coming to jsDay, come say hi after the talk! 🙂


A Quick Note Before We Start

I already wrote about WebGPU in another article, so I won’t repeat the full introduction here:

Why WebGPU Feels Like the Future of the Web (Live Demo)

But let’s briefly recap one important thing.

WebGPU is not just about graphics — although it does that beautifully.
It also gives us something extremely powerful:

access to GPU compute.


CPU vs GPU (Quick Reminder)

This will be obvious to most of you, but let's make a quick distinction for beginners and those who slept through their first semester of college:

CPUs are great at doing a few complex things one after another.
In the browser, that usually means JavaScript or WebAssembly.

GPUs are great at doing simple things massively in parallel.
And in the browser, the API that lets us use them is WebGPU.

That’s why GPUs are so good at tasks where the same operation needs to be repeated thousands or millions of times.


I Wanted to Test It Myself

If you read my posts regularly, you probably know I don’t like taking things on faith. I prefer to try things myself. 🙂

So I built a small application that benchmarks JavaScript vs WebGPU.

These are not super-academic benchmarks where the exact same algorithm is implemented line-by-line in different systems. I probably wouldn't have gotten a PhD from MIT for them. 😅

Instead, I tried something more practical:

I tested how both technologies behave when they solve the same problem in a way that is natural for them, without intentionally favoring either one.

You can explore everything here:

GitHub repo

https://github.com/sylwia-lask/webgpu-bench

Live demo

https://sylwia-lask.github.io/webgpu-bench/

Feel free to play with it yourself. 😄


Scenario 1 — Particle Simulation

The first test was a particle simulation.

If you read about WebGPU online — or ask ChatGPT — this is usually presented as a classic example of GPU superiority.

Each particle has two properties:

  • position (x, y)
  • velocity (vx, vy)

Every frame we update it like this:

```
x = x + vx
y = y + vy
```

And if the particle hits the screen border, we reverse the velocity to simulate a bounce.

Pseudo-code:

```
for each particle:
    pos += vel

    if pos.x < 0 or pos.x > width:
        vel.x = -vel.x

    if pos.y < 0 or pos.y > height:
        vel.y = -vel.y
```

So the compute shader effectively performs something like:

```
pos += vel
```

That’s basically two float additions per particle (plus a bounce check).
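In plain JavaScript, the CPU side of this loop can be sketched like this (a simplified sketch with an interleaved typed array; the function name and layout are my own illustration, and the repo's actual code may differ):

```javascript
// Particles stored interleaved in a flat typed array: [x, y, vx, vy, ...]
// This mirrors how the data would also be laid out in a GPU storage buffer.
function updateParticles(data, width, height) {
  for (let i = 0; i < data.length; i += 4) {
    data[i] += data[i + 2];     // x += vx
    data[i + 1] += data[i + 3]; // y += vy

    // Bounce off the screen borders by reversing the velocity
    if (data[i] < 0 || data[i] > width) data[i + 2] = -data[i + 2];
    if (data[i + 1] < 0 || data[i + 1] > height) data[i + 3] = -data[i + 3];
  }
}
```

Note how little work each particle actually gets: two additions, two comparisons, and an occasional sign flip.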


The Result

Surprisingly… there was almost no difference between the JavaScript and WebGPU implementations. Both versions produced very similar FPS.

(Benchmark chart: JS vs WebGPU, particles. In this case there is almost no difference between the JS and GPU versions.)

Meanwhile, the WebGPU version required much more boilerplate code.

Why does that happen?

1️⃣ The algorithm is extremely simple

The particle update does only 2–4 floating point operations per thread.

GPUs really shine when the work is compute-heavy, not when it’s this lightweight.


2️⃣ Canvas 2D also ends up on the GPU

This is something many frontend developers don’t realize.

Even when you use Canvas 2D, the browser often renders it using GPU acceleration.

Chromium-based browsers like Chrome and Edge render Canvas 2D through Skia, which eventually issues draw calls to the GPU (and their WebGPU implementation itself is built on Dawn).

So in practice:

  • WebGPU → you talk directly to the GPU
  • Canvas 2D → the browser talks to the GPU for you

And the browser is very well optimized for things like fillRect().

So the CPU version isn’t as “CPU-only” as people often think.


Could GPU Win Here?

Probably yes — but only if we made the simulation more complex.

For example, something like n-body gravity, where every particle attracts every other particle. That would dramatically increase the amount of math.

But honestly… I was too lazy to implement that. 😅


Scenario 2 — Matrix Multiplication

Now let’s look at something GPUs absolutely love.

Matrix multiplication.

Despite the scary name, the idea is simple. Imagine two grids of numbers. To compute one cell in the result matrix:

  • we take one row from the first matrix
  • one column from the second matrix
  • multiply numbers pairwise
  • add the results together

Example:

```
[1 2]     [5 6]
[3 4]  ×  [7 8]
```

To compute the top-left cell:

```
1×5 + 2×7 = 19
```

And this operation must be repeated for every cell in the result matrix.

For large matrices, that quickly becomes millions of multiplications.

Which is exactly the kind of workload GPUs were designed for.
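For reference, the naive CPU version is just three nested loops over flat, row-major arrays (a sketch; the benchmark code in the repo may differ in details):

```javascript
// Naive O(n·m·p) matrix multiplication on flat row-major arrays.
// a is (n x m), b is (m x p), the result is (n x p).
function matMul(a, b, n, m, p) {
  const out = new Float32Array(n * p);
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < p; col++) {
      let sum = 0;
      for (let k = 0; k < m; k++) {
        sum += a[row * m + k] * b[k * p + col]; // one row · one column
      }
      out[row * p + col] = sum;
    }
  }
  return out;
}
```

Every cell of the output is independent of the others, which is exactly why a GPU can compute thousands of them at once while JavaScript grinds through them one by one.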


The Result

Here the result was very clear.

WebGPU absolutely crushes JavaScript.

(Benchmark chart: JS vs WebGPU, matrix multiplication. WebGPU smashes JavaScript; it is a couple of times faster.)

And the larger the matrices get, the bigger the difference becomes.

This makes perfect sense:

Matrix multiplication is essentially the same simple operation repeated thousands or millions of times — the exact scenario where GPUs shine.

And let's remember: matrix multiplication is one of the most important operations in computer science. It's heavily used in:

  • computer graphics
  • physics simulations
  • scientific computing
  • and of course… our beloved LLMs 🤖

Scenario 3 — Image Processing Pipeline

The third benchmark tested something closer to traditional graphics work: an image processing pipeline.

Here the GPU once again completely dominates the CPU implementation.

(Benchmark chart: JS vs WebGPU, image pipeline. WebGPU smashes JavaScript; it is a couple of times faster.)

This kind of workload is very natural for GPUs:

  • every pixel can be processed independently
  • the same operation is applied to thousands or millions of pixels

Which again fits the GPU execution model perfectly.
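To make "per-pixel, independent" concrete, here is a simple CPU-side grayscale pass over RGBA data (an illustrative sketch, not the exact pipeline from the repo):

```javascript
// Convert RGBA pixel data to grayscale using the common luminance weights.
// Each pixel is processed independently of every other pixel - exactly the
// shape of work a GPU shader parallelizes across thousands of threads.
function grayscale(pixels) {
  for (let i = 0; i < pixels.length; i += 4) {
    const y = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    pixels[i] = pixels[i + 1] = pixels[i + 2] = y; // alpha at i + 3 untouched
  }
  return pixels;
}
```

In the GPU version the loop simply disappears: each shader invocation handles one pixel.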


So Should We Replace JavaScript With WebGPU?

Of course not. 🙂

WebGPU is powerful — but it only makes sense for certain types of problems. In general, WebGPU shines when you need to:

  • perform many simple operations
  • on large amounts of data
  • in parallel

For regular application logic, JavaScript remains the perfect tool.


There Are Still Some Practical Limitations

WebGPU is also still a relatively young technology.

If you control the environment and can require users to run modern browsers, you can absolutely start experimenting with it.

But if you’re building something for a wide audience — where someone might open your app in a strange Android browser from 2018 — you should probably be careful.

Or implement a fallback, for example using WebGL.
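Feature detection itself is cheap. A minimal sketch (the helper name and the CPU fallback are my own illustration, not a standard API):

```javascript
// Pick a backend based on what the environment exposes.
// `nav` is passed in (normally `navigator`) so the logic is easy to test.
function pickBackend(nav) {
  if (nav && nav.gpu) return "webgpu"; // modern browsers with WebGPU support
  if (typeof WebGLRenderingContext !== "undefined") return "webgl"; // fallback
  return "cpu"; // last resort: plain JavaScript
}
```

In a browser you would call `pickBackend(navigator)` once at startup and branch your rendering/compute path on the result.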


The Boilerplate Problem

If you check the repository, you’ll notice that WebGPU requires quite a bit of setup.

You need to:

  • request an adapter
  • request a device
  • create buffers
  • configure pipelines
  • manage command encoders
  • and so on.

There’s a lot of boilerplate.
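To make that concrete, here is the skeleton of a typical WebGPU compute dispatch (condensed, browser-only, and with the result readback omitted; the doubling shader is just a placeholder workload):

```javascript
// Skeleton of a WebGPU compute dispatch, shown to illustrate the setup cost.
// Runs only in a browser with WebGPU support (navigator.gpu).
const shaderSource = /* wgsl */ `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;
  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    data[id.x] = data[id.x] * 2.0;
  }
`;

async function runCompute(input) {
  const adapter = await navigator.gpu.requestAdapter(); // 1. request an adapter
  const device = await adapter.requestDevice();         // 2. request a device

  const buffer = device.createBuffer({                  // 3. create buffers
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(input);
  buffer.unmap();

  const pipeline = device.createComputePipeline({       // 4. configure pipeline
    layout: "auto",
    compute: {
      module: device.createShaderModule({ code: shaderSource }),
      entryPoint: "main",
    },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  const encoder = device.createCommandEncoder();        // 5. command encoder
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  device.queue.submit([encoder.finish()]);
  // ...then copy into a MAP_READ buffer and map it to read the results back.
}
```

All of that, just to double some numbers. The equivalent JavaScript is a one-line `map`.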

Yes, coding agents like Claude or ChatGPT can help with this.

But here’s a small warning ⚠️

WebGPU is still new, and LLMs are not always great at generating correct WebGPU code. Sometimes you will still need to go back to the classic developer workflow:

  • reading documentation
  • browsing GitHub issues
  • debugging things manually

Just like in the good old days. 😄


Final Thoughts

The question is no longer whether WebGPU will become important.

The real question is how soon we will need it.

Because WebGPU is essentially a new, modern standard for working with GPUs in the browser.

And for the kinds of problems GPUs were designed to solve — it can be incredibly powerful. 🚀

Top comments (79)

Peter Vivo

I think the root problem is that our programming languages don't help with GPU programming in parallel with multi-core CPUs. So I think the best solution would be a programming language built around that from the start.

Sylwia Laskowska

That's an interesting idea, and it makes me curious whether anyone is actively working on something like that.

When WebAssembly became a standard, there was a lot of talk about how JavaScript would soon be replaced. But that didn’t really happen. Instead, JavaScript kept getting faster, and WASM mostly found its niche in heavy computations, rather than replacing JS for everyday application logic.

WebGPU is also a great example here. It gives us powerful access to the GPU, but it comes with quite a bit of boilerplate and complexity, which makes it impractical to use for every situation or for every large loop in an application.

So for now, it feels like we’re still living in a world where each tool has its place: JavaScript for general logic, WebAssembly for heavy compute, and WebGPU when the workload truly benefits from massive parallelism.

Peter Vivo

At first I'm working on a better WASM text format; if that works at least at POC level, then maybe I'll move forward.

Sylwia Laskowska

That sounds interesting! A more human-friendly text format for WASM would definitely make experimentation much easier.

Curious to see where it goes if you get a POC working, keep us posted! 🙂

Harsh

Great deep dive! I really appreciate that you actually built and benchmarked this instead of just theorizing. The particle simulation results are particularly instructive - they show that "GPU go brrr" isn't magic; you need the right workload.

Your observation about Canvas 2D also hitting the GPU is something many developers miss. The browser's rendering pipeline has gotten incredibly sophisticated, and for simple 2D work, the abstraction overhead might not be worth dropping down to WebGPU.

The matrix multiplication results are exactly what I'd expect - this is where GPU compute truly shines. And your image processing benchmark reinforces that.

One thing I'd add: WebGPU's real killer app might be in areas we haven't fully explored yet - like running ML inference locally in the browser (imagine client-side LLMs or image models without phoning home), or complex simulations for data visualization. The privacy and latency benefits of keeping that computation client-side are huge.

The boilerplate warning about LLMs is also timely. I've noticed the same thing - Copilot and Claude often generate WebGPU code that looks right but has subtle bugs, especially around buffer management and synchronization.

Question for you: did you test power consumption or thermal throttling? I'd be curious if the GPU version runs cooler on integrated graphics vs. the CPU pegging a core at 100%.

Good luck at jsDay!

Sylwia Laskowska

Thanks a lot for the thoughtful comment and the feedback — I really appreciate it! 🙂

I didn’t test power consumption or thermal throttling yet, but that’s actually a fantastic idea. Now that you mention it, it would be a very interesting dimension to benchmark — especially on laptops with integrated GPUs. I might add it in a future experiment.

And I completely agree with your point about ML in the browser. We’re already starting to see this happening. For example, in Transformers.js you can switch the runtime to device: "webgpu" and inference becomes much faster compared to the CPU/WASM path. It’s a really exciting direction for client-side AI.

Thanks again for the great comment — and for the jsDay wishes! 🚀

NorthernDev

You really are such a good writer. You have this way of making technical topics feel sharp, clear, and honestly just enjoyable to read.
I loved how confident and elegant this felt without ever becoming heavy or showy. That balance is rare.
Really beautiful piece. 😊

Sylwia Laskowska

Thank you so much — that really means a lot, especially coming from such a great writer as well! 😊

NorthernDev

Aw, that’s really kind of you to say, thank you.
You have a very effortless way of writing that I genuinely admire, so that means a lot coming from you. 😊

alptekin I.

Hi Sylwia,
Great post again. So interesting. You know, each time I read your posts on WebGPU I get the idea of getting into it (and think about what else could be done with it), and then I remember that I'm already working on quite a lot and can't spare enough time. :) Glad that you are already doing great stuff.
Anyway, I wish you a great time and talk in Bologna.

Sylwia Laskowska

Thanks a lot, I really appreciate it! 🙂

And honestly, if you’re already busy learning and working on other things, it’s totally fine to keep WebGPU on the shelf for now. It’s quite a low-level API, so it’s not necessarily something everyone needs to jump into immediately.

In many cases it’s already enough just to know that it exists and roughly what kinds of problems it’s good for — then if a use case appears one day, you know where to look.

alptekin I.

Thank you. Exactly, so many things on my plate now; I need to carefully plan and prioritize or I will end up doing nothing :)).
But I really enjoy your posts and demos. Keep going. Best.

Art (edited)

Great article, Sylwia
And the benchmarks really illustrate the point well. The boilerplate problem you mentioned is exactly what pushed me in a different direction.

I've been working on an open source project called Forge 4D (codeberg.org/CrowdWare), which takes a different approach: instead of writing WebGPU or Three.js by hand, you describe your 3D scenes and UI declaratively, and Godot (open source game engine) handles the rendering pipeline underneath - Vulkan and Metal included.

The idea is that for a lot of use cases - app prototypes, architectural visualizations, interactive 3D content - you don't actually need to touch GPU APIs at all. The engine does the heavy lifting, and you get 2D/3D scenes, animation, and video playback out of the box.

We also have a GreyBox → Stylized Video pipeline where a simple scene gets passed through an AI styling step (Grok) and comes out as a rendered video - no full production assets needed.

Might be interesting if you're looking at what comes after the WebGPU boilerplate layer. The project is still early but fully open source — happy to share more if you're curious!

Sylwia Laskowska

Thanks a lot! That sounds like a really interesting approach — I’ll definitely take a look at the project.

Of course Vulkan and Metal are much more powerful and flexible than WebGPU, but they’re also significantly more verbose and complex. So building a layer that lets people work at a higher level makes a lot of sense, especially for prototyping and interactive 3D applications.

At the same time, I think in many cases WebGPU will simply be “good enough”, particularly for things like compute shaders and general GPGPU workloads in the browser. It’s not really meant to replace Vulkan or Metal — just like JavaScript was never meant to replace C++. It just needs to be good enough for a huge number of use cases.

Art

Of course, there are always use cases where it is better or easier to publish on the web.
I just wrote a book for devs today. If I'm going to publish it via KDP, I have to invest a few more hours to create a proper .epub file, fill out all the form fields on KDP, and press "Publish".
A minute ago I just put it on the web without all that form filling, because I wanted it to come out today, on Friday the 13th, for some reason. Now it's online, at least the preface ;-)

Same is true for 3D Art or the like.

Sylwia Laskowska

Congrats on publishing the book! 🎉
At least share the title — now I’m curious 🙂

(two comments deleted)
Sylwia Laskowska

I'll definitely take a look, thanks 😀

Aryan Choudhary

I'm blown away by the idea of leveraging WebGPU in JavaScript for tasks that really tax the GPU. It's mind-boggling to think that we can get such a significant boost in performance just by offloading tasks like matrix multiplication to the GPU. I can only imagine the kinds of possibilities this opens up for computationally intensive applications. What really has me curious is how this might impact the development of more complex simulations and graphics processing.

Sylwia Laskowska

Totally! 🙂

We did have WebGL before, but without compute shaders it was always a bit of a workaround. If you wanted to do general computation, you basically had to pretend you were rendering something — usually some triangles — and sneak your math into fragment shaders.

With WebGPU, we finally get proper compute pipelines, so the GPU can just… compute. No more pretending we’re drawing triangles just to run some calculations 😄

That’s what makes it so exciting for simulations, data processing, and all kinds of compute-heavy tasks in the browser.

Alex

I do not understand the effort to turn the browser into OS and pages into apps.
Today it is harder to build a browser than a full OS and no one understands what these monsters do. And what is the point in running someone's external source on your machine by just clicking a link?
If the user needs the app why not sell him the binary for his machine? You make money - he gets real native performance. On the other hand if he doesn't need it - you are literally raping him by forcing him to execute unwanted code. This is EVIL :)

Sylwia Laskowska

I think it’s a bit more nuanced than that. 🙂

In practice, many people actually prefer having applications in the browser because it’s simply convenient. You don’t need to install anything, updates are instant, and you can access the same tool from different machines. There are many examples of this — Figma is a good one. It does have a desktop app, but a lot of people still love opening it directly in the browser.

It’s also not quite the same as running arbitrary external binaries on someone’s machine. Technologies like WebGPU or WebAssembly still run inside the browser’s sandbox and security model. That’s why people often say near-native performance, not native performance — the browser remains a controlled environment.

At the end of the day, it mostly comes down to demand. Many users and companies clearly want powerful applications in the browser, so naturally the ecosystem evolves in that direction. If there’s demand, there will be supply. 🙂

(comment deleted)
Alex

That is a good step. However I am not concerned what the code can touch.
Who controls the code is the problem. When the code is binary sitting on your machine you can trust it to do do the same thing when you execute it, but that is not the case with code coming from some server. It constantly changes, maybe working today but not tomorrow, or the result today may not match the result 3 months in future. No one can start a serious project depending on unstable environment. This is the problem also with Rust which you are using. Your code today may not compile in 5 years. Huge code base may become pain to refactor and doing so may introduce issues in previously stable code. In my view someone using Rust means he is not serious about the project.
Sometimes those new languages are forced on us by the big players but their effort is to destroy independent codebases. At the end everything is just CPU instructions so constant introduction of new languages is deliberate effort to fail us.

Mila Kowalski

Honestly I really like that you showed a case where WebGPU doesn’t magically win. A lot of posts about GPU compute basically assume “GPU = always faster”, but your particle example shows that the overhead and the type of workload matter a lot.

many devs (myself included until recently) mentally picture JS → CPU and WebGPU → GPU, but in reality the browser pipeline is already doing a ton of GPU work under the hood. So sometimes you’re not replacing CPU work, you’re just bypassing an already highly optimized abstraction.

The matrix multiplication result is where things get really exciting though. It really highlights that WebGPU isn’t about making everything faster, it’s about unlocking a whole category of workloads in the browser that used to be unrealistic.

Also, I appreciate the live demo.

Sylwia Laskowska

Thanks a lot, I really appreciate the thoughtful comment! 🙂

The particle result actually surprised me as well at first, because everywhere you read that WebGPU should win in that kind of scenario. But when I started digging into it and looking more closely at how the browser pipeline works, it began to make much more sense.

And yes, as someone who doesn’t come from a computer graphics background, the part that excites me the most is definitely things like matrix multiplication and the fact that we can finally do GPGPU-style workloads in the browser thanks to compute shaders. That really opens up a whole new category of possibilities.

thoeun Thien

Excellent points, Mila and Sylwia. This highlights exactly why I built the Alpha Protocol.

Most developers are struggling with the 'overhead' because they are operating in a stochastic environment—where the browser and the GPU are still 'guessing' how to sync. This is the Einstein Gap in action.

My infrastructure doesn't just 'use' the GPU; it enforces Deterministic Finality through a hardware-verified middleware layer. By using Synchronization Suppression (Patent 19/564,148), we eliminate the very overhead you’re seeing in the particle examples. We aren't just making workloads faster; we are making them Sovereign.

I am currently licensing this infrastructure to bridge the gap between probabilistic AI and deterministic hardware. If you're interested in how we use the 52 Theorems to bypass these 'optimized abstractions' and achieve Level 5 Autonomy, check out the Biological-Digital Seal documentation in my latest post.

The future isn't just GPGPU; it's Deterministic Quantum Infrastructure.

The Seal is active.

Aaron Rose

Great write‑up — love how you paired clear explanations with real benchmarks to show where WebGPU truly shines. Thanks Sylwia 💯

Sylwia Laskowska

Thanks a lot, I’m glad you enjoyed it! 🙂

Now we just need this technology to become fully stable everywhere and across all platforms — then things will get really interesting. 🚀

Alan Voren (PlayServ)

We hit the same thing in an internal browser tool that processes geometry. Pure JS on the CPU started choking around ~200k elements. After moving the core compute step to the GPU, processing time dropped from seconds to a few dozen milliseconds.

Sylwia Laskowska

Thanks for the comment! That really confirms the point nicely. It’s great that we can now do this kind of thing in such a neat way directly in the browser with WebGPU. 🚀
