<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ahmet Barış Günaydın</title>
    <description>The latest articles on DEV Community by Ahmet Barış Günaydın (@barisgunaydin).</description>
    <link>https://dev.to/barisgunaydin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F859093%2F8f3e6222-0dcf-4b36-8809-2d752c87f867.jpeg</url>
      <title>DEV Community: Ahmet Barış Günaydın</title>
      <link>https://dev.to/barisgunaydin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/barisgunaydin"/>
    <language>en</language>
    <item>
      <title>I fused 1,500 GPU dispatches into one. Here's what happened.</title>
      <dc:creator>Ahmet Barış Günaydın</dc:creator>
      <pubDate>Mon, 30 Mar 2026 15:19:30 +0000</pubDate>
      <link>https://dev.to/barisgunaydin/i-fused-1500-gpu-dispatches-into-one-heres-what-happened-39op</link>
      <guid>https://dev.to/barisgunaydin/i-fused-1500-gpu-dispatches-into-one-heres-what-happened-39op</guid>
<description>&lt;p&gt;Every ML framework does GPU computation the same way: send a task to the GPU, wait, send the next one, wait, repeat. For a 1,500-step simulation, that's 22,500 separate GPU commands per generation (roughly 15 per step).&lt;/p&gt;

&lt;p&gt;I tried something different. I wrote a WebGPU compute shader that runs the entire 1,500-step simulation in a single GPU dispatch. No round-trips. No waiting. The GPU just loops internally.&lt;/p&gt;
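
&lt;p&gt;To make the idea concrete, here's a minimal sketch of what the kernel side can look like. It is not the shader from the repo: the state layout and step math are placeholders, and the WGSL sits in a JavaScript template literal the way you'd normally feed it to &lt;code&gt;createShaderModule&lt;/code&gt;. Each GPU thread owns one simulation instance and loops over every timestep inside the shader:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch only: one GPU thread per simulation instance; the whole episode
// runs inside a single shader invocation. The step math is a placeholder.
const fusedShader = /* wgsl */ `
  struct Params { numSteps : u32, numInstances : u32 }

  @group(0) @binding(0) var&amp;lt;uniform&amp;gt; params : Params;
  @group(0) @binding(1) var&amp;lt;storage, read_write&amp;gt; state : array&amp;lt;f32&amp;gt;;
  @group(0) @binding(2) var&amp;lt;storage, read_write&amp;gt; results : array&amp;lt;f32&amp;gt;;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) gid : vec3&amp;lt;u32&amp;gt;) {
    let i = gid.x;
    if (i &amp;gt;= params.numInstances) { return; }

    var s = state[i];
    // Every timestep runs here, back to back, with no CPU round-trip in between.
    for (var t = 0u; t &amp;lt; params.numSteps; t = t + 1u) {
      s = s * 1.01 + 0.1; // placeholder for the real per-step update
    }
    results[i] = s;
  }
`;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The sequential dependence doesn't go away (each iteration reads the previous value of &lt;code&gt;s&lt;/code&gt;), but it now lives in registers instead of crossing the CPU/GPU boundary 1,500 times.&lt;/p&gt;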

&lt;h2&gt;
  The results (same hardware, no tricks)
&lt;/h2&gt;

&lt;p&gt;On the same Apple M2 Pro:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;WebGPU (Chrome): 46.2 gen/s&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PyTorch MPS: 0.29 gen/s&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;That's 159x.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On embarrassingly parallel workloads (Rastrigin), they're basically tied (1.06x). The advantage is specific to sequential workloads — simulations, RL rollouts, trading strategies — where each step depends on the previous one.&lt;/p&gt;

&lt;h2&gt;
  Why can't PyTorch just do this?
&lt;/h2&gt;

&lt;p&gt;I tested &lt;code&gt;torch.compile&lt;/code&gt; with the Inductor backend. It tries to unroll the loop into a single computation graph:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Timesteps&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;500&lt;/td&gt;
&lt;td&gt;Works, 2x speedup, 25s compile&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;RecursionError&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5,000&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;OOM killed&lt;/strong&gt; after 30 min&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The compiler crashes because it tries to represent the entire loop as a static graph. WebGPU's approach is different — the shader contains an actual &lt;code&gt;for&lt;/code&gt; loop that runs on the GPU. Simple, but it works.&lt;/p&gt;
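
&lt;p&gt;For illustration, here's roughly what the host side looks like once everything is fused: one command buffer, one dispatch, one wait per generation. This is a sketch, not the benchmark code from the repo; it assumes the &lt;code&gt;fusedShader&lt;/code&gt; string from the earlier snippet, and the population size and buffer layout are made up:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// One fused dispatch per generation instead of one per timestep.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const module = device.createShaderModule({ code: fusedShader });
const pipeline = device.createComputePipeline({
  layout: 'auto',
  compute: { module, entryPoint: 'main' },
});

const NUM_STEPS = 1500;
const NUM_INSTANCES = 4096; // illustrative population size

const paramsBuffer = device.createBuffer({
  size: 8,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(paramsBuffer, 0, new Uint32Array([NUM_STEPS, NUM_INSTANCES]));

const stateBuffer = device.createBuffer({
  size: NUM_INSTANCES * 4, // one f32 of state per instance in this sketch
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
});
const resultsBuffer = device.createBuffer({
  size: NUM_INSTANCES * 4,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
});

const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    { binding: 0, resource: { buffer: paramsBuffer } },
    { binding: 1, resource: { buffer: stateBuffer } },
    { binding: 2, resource: { buffer: resultsBuffer } },
  ],
});

const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
// Single dispatch: all 1,500 timesteps happen inside the shader's loop.
pass.dispatchWorkgroups(Math.ceil(NUM_INSTANCES / 64));
pass.end();
device.queue.submit([encoder.finish()]);
await device.queue.onSubmittedWorkDone(); // one round-trip per generation, not 1,500
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the dispatch-per-step version, the encode/submit/wait pattern sits inside the timestep loop, and that per-step synchronization is exactly the overhead the fused kernel removes.&lt;/p&gt;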

&lt;h2&gt;
  JAX gets closer but not all the way
&lt;/h2&gt;

&lt;p&gt;JAX with &lt;code&gt;lax.scan&lt;/code&gt; + &lt;code&gt;vmap&lt;/code&gt; on a Tesla T4 achieves 6.43 gen/s on the same financial simulation — 13x over PyTorch CUDA on the same T4. XLA does fuse the loop. But it still ends up 7.2x slower than the hand-fused WebGPU shader, likely because the XLA kernel still has per-step overhead internally (register spills, memory traffic).&lt;/p&gt;

&lt;p&gt;At shorter episodes (L=500, Acrobot), JAX nearly closes the gap (1.29x). The fusion advantage scales with episode length.&lt;/p&gt;

&lt;h2&gt;
  The browser overhead is real but small
&lt;/h2&gt;

&lt;p&gt;I ran the exact same WGSL shader through wgpu-native (Rust, no browser). Native Metal: 326.5 gen/s. Chrome WebGPU: 170.3 gen/s. That's a 1.92x browser tax (48% overhead).&lt;/p&gt;

&lt;p&gt;But here's the weird part: &lt;strong&gt;PyTorch MPS (160.5 gen/s) is slower than WebGPU in Chrome (170.3 gen/s)&lt;/strong&gt; on parallel workloads. The browser's overhead is smaller than PyTorch's framework overhead.&lt;/p&gt;

&lt;h2&gt;
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;I built a benchmark site where you can test your GPU in the browser. No install, no account:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://gpubench.dev" rel="noopener noreferrer"&gt;gpubench.dev&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;~300 people have run it so far — Apple, AMD, NVIDIA, Intel GPUs all working in Chrome. The data is starting to paint an interesting picture of WebGPU performance across real hardware.&lt;/p&gt;

&lt;h2&gt;
  The paper
&lt;/h2&gt;

&lt;p&gt;Full methodology, 10 tables, same-hardware comparisons, ablation study, all numbers verified:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://doi.org/10.5281/zenodo.19335214" rel="noopener noreferrer"&gt;doi.org/10.5281/zenodo.19335214&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Code (WGSL shaders, Puppeteer benchmarks, Python baselines):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/abgnydn/webgpu-kernel-fusion" rel="noopener noreferrer"&gt;github.com/abgnydn/webgpu-kernel-fusion&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  What I'd love to know
&lt;/h2&gt;

&lt;p&gt;Has anyone else used WebGPU compute shaders for non-graphics workloads? I'm curious what other sequential problems would benefit from this fusion pattern — RL environments, Monte Carlo simulations, agent-based models?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Acknowledgment: Drafting assistance by Claude. All experiments, benchmarks, and code by the author.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webgpu</category>
      <category>javascript</category>
      <category>performance</category>
      <category>gpu</category>
    </item>
  </channel>
</rss>
