JavaScript has been the main language of the web for years. Its popularity probably surprised even its creator, Brendan Eich, who famously built the first version of the language in just about a week.
One of the reasons JavaScript became so dominant is the sheer power of the browser. From a single application — the browser — we can access hundreds of millions of websites and applications. No need to download or install anything. That's a huge part of the success.
And browser vendors have been working incredibly hard to make JavaScript faster and faster. Modern engines are extremely optimized. But there is still one fundamental limitation:
JavaScript runs on the CPU.
So what about tasks that are perfect for the GPU?
This is where WebGPU enters the scene. 🚀
Let’s take a look at what it can actually do — and when it really makes sense to use it.
By the way — my jsDay talk is getting closer and closer! I’ll be speaking there about **WebGPU + WebAssembly**, which is exactly the kind of thing you see in this article: GPUs in the browser, compute shaders, and pushing the web a bit further than usual.
To celebrate that (and maybe calm my pre-conference nerves a little 😅), I recorded a short promo reel for the talk which you can find here
If you feel like watching it and dropping a like, I’d really appreciate it.
And if you’re coming to **jsDay**, come say hi after the talk! 🙂
A Quick Note Before We Start
I already wrote about WebGPU in another article, so I won’t repeat the full introduction here:
Why WebGPU Feels Like the Future of the Web (Live Demo)
But let’s briefly recap one important thing.
WebGPU is not just about graphics — although it does that beautifully.
It also gives us something extremely powerful:
access to GPU compute.
CPU vs GPU (Quick Reminder)
This will be obvious to most of you, but let's make a quick distinction for beginners and those who slept through their first semester of college:
CPUs are great at doing a few complex things one after another.
In the browser, that usually means JavaScript or WebAssembly.
GPUs are great at doing simple things massively in parallel.
And in the browser, the API that lets us use them is WebGPU.
That’s why GPUs are so good at tasks where the same operation needs to be repeated thousands or millions of times.
I Wanted to Test It Myself
If you read my posts regularly, you probably know I don’t like taking things on faith. I prefer to try myself. 🙂
So I built a small application that benchmarks JavaScript vs WebGPU.
These are not super-academic benchmarks where the exact same algorithm is implemented line-by-line in different systems. They certainly wouldn't have earned me a PhD from MIT. 😅
Instead, I tried something more practical:
I tested how both technologies behave when they solve the same problem in a way that is natural for them, without intentionally favoring either one.
You can explore everything here:
GitHub repo
https://github.com/sylwia-lask/webgpu-bench
Live demo
https://sylwia-lask.github.io/webgpu-bench/
Feel free to play with it yourself. 😄
Scenario 1 — Particle Simulation
The first test was a particle simulation.
If you read about WebGPU online — or ask ChatGPT — this is usually presented as a classic example of GPU superiority.
Each particle has two properties:
- position (x, y)
- velocity (vx, vy)
Every frame we update it like this:
```
x = x + vx
y = y + vy
```
And if the particle hits the screen border, we reverse the velocity to simulate a bounce.
Pseudo-code:
```
for each particle:
    pos += vel
    if pos.x < 0 or pos.x > width:
        vel.x = -vel.x
    if pos.y < 0 or pos.y > height:
        vel.y = -vel.y
```
So the compute shader effectively performs something like `pos += vel`.
That’s basically two float additions per particle (plus a bounce check).
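In plain JavaScript, the whole per-frame update boils down to a few lines. Here is a simplified sketch (the names are illustrative, not taken verbatim from the benchmark repo):

```javascript
// Simplified CPU-side particle update (sketch; names are illustrative,
// not the exact code from the benchmark repo).
function updateParticles(particles, width, height) {
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    // Bounce off the edges by reversing velocity.
    if (p.x < 0 || p.x > width) p.vx = -p.vx;
    if (p.y < 0 || p.y > height) p.vy = -p.vy;
  }
}
```

That's the entire workload per particle, which already hints at why the GPU has little room to win here.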
The Result
Surprisingly… there was almost no difference between the JavaScript and WebGPU implementations. Both versions produced very similar FPS.
Meanwhile, the WebGPU version required much more boilerplate code.
Why does that happen?
1️⃣ The algorithm is extremely simple
The particle update does only 2–4 floating point operations per thread.
GPUs really shine when the work is compute-heavy, not when it’s this lightweight.
2️⃣ Canvas 2D also ends up on the GPU
This is something many frontend developers don’t realize.
Even when you use Canvas 2D, the browser often renders it using GPU acceleration.
Chrome and Edge, for example, render Canvas 2D through the Skia graphics library, which eventually issues draw calls to the GPU.
So in practice:
- WebGPU → you talk directly to the GPU
- Canvas 2D → the browser talks to the GPU for you
And the browser is very well optimized for things like fillRect().
So the CPU version isn’t as “CPU-only” as people often think.
Could GPU Win Here?
Probably yes — but only if we made the simulation more complex.
For example, something like n-body gravity, where every particle attracts every other particle. That would dramatically increase the amount of math.
But honestly… I was too lazy to implement that. 😅
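For the curious, here is a hypothetical CPU-side sketch of what that n-body variant could look like (not implemented in the benchmark). Each particle now loops over every other particle, turning two additions into O(n²) work:

```javascript
// Hypothetical O(n^2) n-body gravity step (NOT part of the benchmark repo).
// Every particle attracts every other one, so the work per frame grows
// quadratically with the particle count — much better GPU territory.
function nBodyStep(particles, G = 1, dt = 0.01) {
  for (const a of particles) {
    let ax = 0, ay = 0;
    for (const b of particles) {
      if (a === b) continue;
      const dx = b.x - a.x;
      const dy = b.y - a.y;
      const distSq = dx * dx + dy * dy + 1e-6; // softening avoids division by zero
      const invDist = 1 / Math.sqrt(distSq);
      const force = (G * b.mass) * invDist * invDist;
      ax += force * dx * invDist;
      ay += force * dy * invDist;
    }
    a.vx += ax * dt;
    a.vy += ay * dt;
  }
  for (const p of particles) {
    p.x += p.vx * dt;
    p.y += p.vy * dt;
  }
}
```

On the GPU, each outer iteration maps naturally onto one thread, which is why this variant should favor WebGPU far more strongly than the simple bounce simulation.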
Scenario 2 — Matrix Multiplication
Now let’s look at something GPUs absolutely love.
Matrix multiplication.
Despite the scary name, the idea is simple. Imagine two grids of numbers. To compute one cell in the result matrix:
- we take one row from the first matrix
- one column from the second matrix
- multiply numbers pairwise
- add the results together
Example:
```
[1 2]   [5 6]
[3 4] × [7 8]
```
To compute the top-left cell:
1×5 + 2×7 = 19
And this operation must be repeated for every cell in the result matrix.
For large matrices, that quickly becomes millions of multiplications.
Which is exactly the kind of workload GPUs were designed for.
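In plain JavaScript, the naive algorithm is just three nested loops. Here is a sketch (not the exact benchmark code, which may use flat typed arrays instead of nested arrays):

```javascript
// Naive O(n^3) matrix multiplication (sketch; not the exact benchmark code).
// `a` is rows×inner, `b` is inner×cols, both as arrays of rows.
function matMul(a, b) {
  const rows = a.length;
  const inner = b.length;
  const cols = b[0].length;
  const out = Array.from({ length: rows }, () => new Array(cols).fill(0));
  for (let i = 0; i < rows; i++) {
    for (let j = 0; j < cols; j++) {
      let sum = 0;
      for (let k = 0; k < inner; k++) {
        sum += a[i][k] * b[k][j]; // row of a · column of b
      }
      out[i][j] = sum;
    }
  }
  return out;
}

// The example from above:
// [1 2]   [5 6]   [19 22]
// [3 4] × [7 8] = [43 50]
```

Notice that every cell of the result is computed independently of all the others, which is precisely what lets a GPU assign one thread per cell.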
The Result
Here the result was very clear.
WebGPU absolutely crushes JavaScript.
And the larger the matrices get, the bigger the difference becomes.
This makes perfect sense:
Matrix multiplication is essentially the same simple operation repeated thousands or millions of times — the exact scenario where GPUs shine.
And let's remember it's one of the most important operations in computer science. Matrix multiplication is heavily used in:
- computer graphics
- physics simulations
- scientific computing
- and of course… our beloved LLMs 🤖
Scenario 3 — Image Processing Pipeline
The third benchmark tested something closer to traditional graphics work: an image processing pipeline.
Here the GPU once again completely dominates the CPU implementation.
This kind of workload is very natural for GPUs:
- every pixel can be processed independently
- the same operation is applied to thousands or millions of pixels
Which again fits the GPU execution model perfectly.
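As an illustration, a typical per-pixel stage looks like this on the CPU. Grayscale conversion is one common example; the exact filters in the benchmark may differ:

```javascript
// Per-pixel grayscale pass over RGBA pixel data (illustrative sketch;
// the benchmark's actual filter chain may differ).
// `data` is a flat buffer in ImageData layout: [r, g, b, a, r, g, b, a, ...]
function grayscale(data) {
  for (let i = 0; i < data.length; i += 4) {
    // Standard Rec. 601 luminance weights.
    const y = Math.round(0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2]);
    data[i] = data[i + 1] = data[i + 2] = y; // alpha (data[i + 3]) untouched
  }
  return data;
}
```

Since no pixel depends on any other, a GPU can run one thread per pixel, while the CPU has to walk the buffer sequentially.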
So Should We Replace JavaScript With WebGPU?
Of course not. 🙂
WebGPU is powerful — but it only makes sense for certain types of problems. In general, WebGPU shines when you need to:
- perform many simple operations
- on large amounts of data
- in parallel
For regular application logic, JavaScript remains the perfect tool.
There Are Still Some Practical Limitations
WebGPU is also still a relatively young technology.
If you control the environment and can require users to run modern browsers, you can absolutely start experimenting with it.
But if you’re building something for a wide audience — where someone might open your app in a strange Android browser from 2018 — you should probably be careful.
Or implement a fallback, for example using WebGL.
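Feature detection is straightforward: `navigator.gpu` simply doesn't exist where WebGPU isn't supported. A minimal sketch (the function name is illustrative):

```javascript
// Pick the best available backend (sketch; the function name is illustrative).
async function pickBackend() {
  if (globalThis.navigator?.gpu) {
    // The adapter can still be null, e.g. on blocklisted hardware.
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) return "webgpu";
  }
  // Fall back to WebGL (or pure JS) when WebGPU is unavailable.
  return "webgl";
}
```

Checking for the adapter, not just the API, matters: a browser can expose `navigator.gpu` and still refuse to hand out an adapter on unsupported hardware.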
The Boilerplate Problem
If you check the repository, you’ll notice that WebGPU requires quite a bit of setup.
You need to:
- request an adapter
- request a device
- create buffers
- configure pipelines
- manage command encoders
- and so on.
There’s a lot of boilerplate.
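To give a feel for it, here is a condensed sketch of the steps listed above, following the standard WebGPU API flow. It doubles every value in an array on the GPU; treat it as illustrative rather than production code (it returns early where WebGPU is unavailable):

```javascript
// Condensed WebGPU compute setup (illustrative sketch of the standard API flow).
// Doubles every value in an array on the GPU; returns null without WebGPU.
async function doubleOnGpu(input) {
  if (!globalThis.navigator?.gpu) return null; // no WebGPU in this environment

  // 1. Request an adapter and a device.
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // 2. Create buffers: one for compute, one for reading results back.
  const data = new Float32Array(input);
  const storage = device.createBuffer({
    size: data.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(storage, 0, data);
  const readback = device.createBuffer({
    size: data.byteLength,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
  });

  // 3. Configure the compute pipeline (WGSL shader).
  const module = device.createShaderModule({
    code: `
      @group(0) @binding(0) var<storage, read_write> buf: array<f32>;
      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3u) {
        if (id.x < arrayLength(&buf)) { buf[id.x] = buf[id.x] * 2.0; }
      }`,
  });
  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module, entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: storage } }],
  });

  // 4. Encode commands, dispatch, and copy the result out.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(data.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(storage, 0, readback, 0, data.byteLength);
  device.queue.submit([encoder.finish()]);

  // 5. Map the readback buffer and return the result.
  await readback.mapAsync(GPUMapMode.READ);
  return Array.from(new Float32Array(readback.getMappedRange()));
}
```

In plain JavaScript, the equivalent is a one-line `map` over the array. That gap is exactly the boilerplate cost this section is about.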
Yes, coding agents like Claude or ChatGPT can help with this.
But here’s a small warning ⚠️
WebGPU is still new, and LLMs are not always great at generating correct WebGPU code. Sometimes you will still need to go back to the classic developer workflow:
- reading documentation
- browsing GitHub issues
- debugging things manually
Just like in the good old days. 😄
Final Thoughts
The question is no longer whether WebGPU will become important.
The real question is how soon we will need it.
Because WebGPU is essentially a new, modern standard for working with GPUs in the browser.
And for the kinds of problems GPUs were designed to solve — it can be incredibly powerful. 🚀



Top comments (9)
I think the root problem is that our programming languages don't help with GPU programming in parallel with a multi-core CPU. So I think the best solution would be creating a programming language built around that.
That's an interesting idea, and it makes me curious whether anyone is actively working on something like that.
When WebAssembly became a standard, there was a lot of talk about how JavaScript would soon be replaced. But that didn’t really happen. Instead, JavaScript kept getting faster, and WASM mostly found its niche in heavy computations, rather than replacing JS for everyday application logic.
WebGPU is also a great example here. It gives us powerful access to the GPU, but it comes with quite a bit of boilerplate and complexity, which makes it impractical to use for every situation or for every large loop in an application.
So for now, it feels like we’re still living in a world where each tool has its place: JavaScript for general logic, WebAssembly for heavy compute, and WebGPU when the workload truly benefits from massive parallelism.
First I'm working on a better WASM text format; if that works at least at a POC level, then maybe I'll move forward from there.
Great deep dive! I really appreciate that you actually built and benchmarked this instead of just theorizing. The particle simulation results are particularly instructive - they show that GPU go brrr isn't magic; you need the right workload.
Your observation about Canvas 2D also hitting the GPU is something many developers miss. The browser's rendering pipeline has gotten incredibly sophisticated, and for simple 2D work, the abstraction overhead might not be worth dropping down to WebGPU.
The matrix multiplication results are exactly what I'd expect - this is where GPU compute truly shines. And your image processing benchmark reinforces that.
One thing I'd add: WebGPU's real killer app might be in areas we haven't fully explored yet - like running ML inference locally in the browser (imagine client-side LLMs or image models without phoning home), or complex simulations for data visualization. The privacy and latency benefits of keeping that computation client-side are huge.
The boilerplate warning about LLMs is also timely. I've noticed the same thing - Copilot and Claude often generate WebGPU code that looks right but has subtle bugs, especially around buffer management and synchronization.
Question for you: did you test power consumption or thermal throttling? I'd be curious if the GPU version runs cooler on integrated graphics vs. the CPU pegging a core at 100%.
Good luck at jsDay!
Thanks a lot for the thoughtful comment and the feedback — I really appreciate it! 🙂
I didn’t test power consumption or thermal throttling yet, but that’s actually a fantastic idea. Now that you mention it, it would be a very interesting dimension to benchmark — especially on laptops with integrated GPUs. I might add it in a future experiment.
And I completely agree with your point about ML in the browser. We’re already starting to see this happening. For example, in Transformers.js you can switch the runtime to `device: "webgpu"` and inference becomes much faster compared to the CPU/WASM path. It’s a really exciting direction for client-side AI.
Thanks again for the great comment — and for the jsDay wishes! 🚀
I'm blown away by the idea of leveraging WebGPU in JavaScript for tasks that really tax the GPU. It's mind-boggling to think that we can get such a significant boost in performance just by offloading tasks like matrix multiplication to the GPU. I can only imagine the kinds of possibilities this opens up for computationally intensive applications. What really has me curious is how this might impact the development of more complex simulations and graphics processing.
Totally! 🙂
We did have WebGL before, but without compute shaders it was always a bit of a workaround. If you wanted to do general computation, you basically had to pretend you were rendering something — usually some triangles — and sneak your math into fragment shaders.
With WebGPU, we finally get proper compute pipelines, so the GPU can just… compute. No more pretending we’re drawing triangles just to run some calculations 😄
That’s what makes it so exciting for simulations, data processing, and all kinds of compute-heavy tasks in the browser.
Have a great time at jsDay! Will the talks be recorded? Would be super interesting to watch 🙂
Thank you so much!
Yes, the talks will be recorded, and I’ll definitely share the recording once it’s available. 🚀