<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: x0101010011</title>
    <description>The latest articles on DEV Community by x0101010011 (@x0101010011).</description>
    <link>https://dev.to/x0101010011</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3675816%2F51f19c43-c51f-4569-a607-3af4ab146932.png</url>
      <title>DEV Community: x0101010011</title>
      <link>https://dev.to/x0101010011</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/x0101010011"/>
    <language>en</language>
    <item>
      <title>Building a Production WebGPU Engine... for a psychotherapy practice?</title>
      <dc:creator>x0101010011</dc:creator>
      <pubDate>Tue, 23 Dec 2025 21:36:00 +0000</pubDate>
      <link>https://dev.to/x0101010011/building-a-production-webgpu-engine-for-a-psychotherapy-practice-43i9</link>
      <guid>https://dev.to/x0101010011/building-a-production-webgpu-engine-for-a-psychotherapy-practice-43i9</guid>
      <description>&lt;h2&gt;
  
  
  The Challenge: Where Noise Becomes Flow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5pscs3lx1tokb874zag.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5pscs3lx1tokb874zag.jpeg" alt="Flow field as psyche metaphor" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When building the digital presence for &lt;strong&gt;Therapy Warsaw&lt;/strong&gt;, we faced an unusual requirement. We didn't want stock photos or static illustrations. We wanted something that felt &lt;em&gt;alive&lt;/em&gt;—a generative texture that was always changing, but never demanding attention.&lt;/p&gt;

&lt;p&gt;The visual metaphor was simple: complex patterns finding clarity. A field of noise, slowly organizing itself into coherent, flowing lines.&lt;/p&gt;

&lt;p&gt;The technical requirements were less simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Organic &amp;amp; Dense:&lt;/strong&gt; ~10,000 interacting particles.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Performance Critical:&lt;/strong&gt; 60 FPS on mobile, even while users scroll.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Resilient:&lt;/strong&gt; Must work on 10-year-old laptops (WebGL2) &lt;em&gt;and&lt;/em&gt; bleeding-edge devices (WebGPU).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Framework-Free:&lt;/strong&gt; No React, no Three.js. Just controlled, fluid logic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is how we built a dual-stack &lt;strong&gt;WebGPU + WebGL2&lt;/strong&gt; engine to solve this.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63oqxksctvg5y7kvbaph.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63oqxksctvg5y7kvbaph.jpeg" alt="Physics transitions" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture: Keeping the UI Responsive
&lt;/h2&gt;

&lt;p&gt;The first rule of heavy graphics on the web: &lt;strong&gt;Get off the main thread.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We strictly separated the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Main Thread:&lt;/strong&gt; DOM, accessibility, routing, UI state.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Worker Thread:&lt;/strong&gt; Physics, geometry generation, rendering via &lt;code&gt;OffscreenCanvas&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even if the physics simulation hiccups, page scrolling stays smooth. Communication happens via a dedicated messaging system that syncs visual "Presets" (colors, speed, turbulence) without blocking.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// main.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;worker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./worker.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;module&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;offscreen&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transferControlToOffscreen&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Hand ownership to the worker&lt;/span&gt;
&lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;postMessage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;init&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;offscreen&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;offscreen&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
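&lt;p&gt;On the worker side, the corresponding handler can be sketched in plain JS. The message shapes (&lt;code&gt;init&lt;/code&gt;, &lt;code&gt;preset&lt;/code&gt;) and field names here are hypothetical stand-ins, not the project's actual protocol:&lt;/p&gt;

```javascript
// worker.js (sketch) — hypothetical message shapes, assuming the main
// thread sends { type: 'init', canvas } and { type: 'preset', preset }.
let canvas = null;
let preset = { hue: 0.08, speed: 1.0, turbulence: 0.5 };

function handleMessage(msg) {
  if (msg.type === 'init') {
    canvas = msg.canvas; // OffscreenCanvas — all rendering happens here
    return 'initialized';
  }
  if (msg.type === 'preset') {
    // Shallow-merge incoming values over the current preset
    preset = Object.assign({}, preset, msg.preset);
    return 'preset-updated';
  }
  return 'ignored';
}

// In a real worker this is wired up as:
// self.onmessage = (e) => handleMessage(e.data);
```

&lt;p&gt;Keeping the handler a pure function makes the preset-sync path easy to test outside a worker context.&lt;/p&gt;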






&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdka9baaalwq2kk9vb1i4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdka9baaalwq2kk9vb1i4.jpeg" alt="#genart" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why WebGPU? (And Why We Still Needed WebGL2)
&lt;/h2&gt;

&lt;p&gt;We started with &lt;strong&gt;WebGPU&lt;/strong&gt; because Compute Shaders are a natural fit for particle systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  The WebGPU Pipeline
&lt;/h3&gt;

&lt;p&gt;We use &lt;strong&gt;Compute Shaders&lt;/strong&gt; for the heavy lifting:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Map Pass:&lt;/strong&gt; Generates noise textures (Burn, Density, Void maps).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Flow Pass:&lt;/strong&gt; Calculates the vector field.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Life Pass:&lt;/strong&gt; Updates particle ages and handles resets.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Physics Pass:&lt;/strong&gt; Moves particles based on flow vectors.&lt;/li&gt;
&lt;/ol&gt;
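&lt;p&gt;To make the Flow Pass concrete, here is the core idea in plain JS. In the engine this runs inside a compute shader against the generated noise maps; &lt;code&gt;valueNoise&lt;/code&gt; below is a cheap stand-in, not the project's actual noise function:&lt;/p&gt;

```javascript
// Sketch of the Flow Pass logic on the CPU (the real version runs in a
// compute shader). valueNoise is a stand-in for the engine's noise maps.
function valueNoise(x, y) {
  // Hash-based pseudo-noise in [0, 1) — illustration only
  const s = Math.sin(x * 12.9898 + y * 78.233) * 43758.5453;
  return s - Math.floor(s);
}

// The flow field maps each position to a unit direction vector;
// turbulence scales how wildly the angle varies across the field.
function flowVector(x, y, turbulence) {
  const angle = valueNoise(x * 0.05, y * 0.05) * Math.PI * 2 * turbulence;
  return { x: Math.cos(angle), y: Math.sin(angle) };
}
```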

&lt;p&gt;The key performance win: avoiding CPU-GPU round trips. The entire simulation stays on the GPU.&lt;/p&gt;

&lt;h3&gt;
  
  
  The WebGL2 Fallback
&lt;/h3&gt;

&lt;p&gt;WebGPU support is growing but not universal. We &lt;em&gt;had&lt;/em&gt; to support WebGL2—but we didn't want a "dumb" fallback.&lt;/p&gt;

&lt;p&gt;To achieve feature parity without destroying the CPU, we used &lt;strong&gt;Transform Feedback&lt;/strong&gt;. This allows WebGL2 to update particle positions in the Vertex Shader and write them back to a buffer, mimicking compute shaders.&lt;/p&gt;
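&lt;p&gt;Transform Feedback cannot read and write the same buffer in one pass, so the fallback double-buffers particle state. The ping-pong bookkeeping, stripped of any GL calls, looks roughly like this (buffer objects here are plain placeholders):&lt;/p&gt;

```javascript
// Ping-pong bookkeeping for Transform Feedback (sketch).
// bufferA/bufferB stand in for WebGL buffer objects; each frame particle
// state is read from one while the vertex shader writes into the other.
function makePingPong(bufferA, bufferB) {
  let readIndex = 0;
  const buffers = [bufferA, bufferB];
  return {
    read: () => buffers[readIndex],        // bind as vertex attribute source
    write: () => buffers[1 - readIndex],   // bind as transform feedback target
    swap: () => { readIndex = 1 - readIndex; },
  };
}
```

&lt;p&gt;Each frame the engine can bind &lt;code&gt;read()&lt;/code&gt; as the attribute source, capture updated positions into &lt;code&gt;write()&lt;/code&gt; via transform feedback, and then &lt;code&gt;swap()&lt;/code&gt;.&lt;/p&gt;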




&lt;h2&gt;
  
  
  The Spring Physics System
&lt;/h2&gt;

&lt;p&gt;When a user navigates between pages, the visualization morphs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Colors shift (e.g., warm orange → deep blue).&lt;/li&gt;
&lt;li&gt;  Chaos decreases or increases.&lt;/li&gt;
&lt;li&gt;  Speed adjusts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We couldn't just &lt;code&gt;lerp&lt;/code&gt; these values; linear interpolation looks robotic. Instead, we implemented a &lt;strong&gt;critically damped spring system&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;updateSpring&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;dt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tension&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;friction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;displacement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;target&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;force&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tension&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;displacement&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;friction&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;velocity&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;velocity&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;force&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;dt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;velocity&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;dt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
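&lt;p&gt;Stepping the spring in isolation shows the damping behaviour. This sketch repeats the function so the snippet is self-contained; the constants match those above:&lt;/p&gt;

```javascript
// Semi-implicit Euler spring, stepped at 60 Hz until it settles.
function updateSpring(state, target, dt) {
  const tension = 120;
  const friction = 20;
  const displacement = target - state.value;
  const force = tension * displacement - friction * state.velocity;
  state.velocity += force * dt;
  state.value += state.velocity * dt;
}

const state = { value: 0, velocity: 0 };
let steps = 120; // two seconds of simulated time at 60 Hz
while (steps > 0) {
  updateSpring(state, 1, 1 / 60);
  steps -= 1;
}
// state.value has now settled very close to the target of 1,
// with a slight overshoot early on rather than a robotic glide.
```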



&lt;p&gt;Every frame, we update ~20 spring-driven parameters and upload them to a &lt;strong&gt;Uniform Buffer Object (UBO)&lt;/strong&gt;. The result: transitions that feel physical, not computed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Optimization: Procedural Vertex Integration
&lt;/h2&gt;

&lt;p&gt;Rendering thick lines usually means generating two triangles (six vertices) per segment. For long trails, that adds up to a lot of vertex data and memory bandwidth.&lt;/p&gt;

&lt;p&gt;Our approach: store &lt;strong&gt;only the head position&lt;/strong&gt; of each line.&lt;/p&gt;

&lt;p&gt;Inside the Vertex Shader, we run a &lt;code&gt;for&lt;/code&gt; loop (~60 iterations) to re-trace the path &lt;em&gt;backwards&lt;/em&gt; through the flow field, reconstructing the trail on the fly.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Pros:&lt;/strong&gt; Massive bandwidth reduction (one point per line instead of hundreds of vertices).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cons:&lt;/strong&gt; Higher ALU cost per vertex.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On modern GPUs, ALU is cheap; bandwidth is expensive. This trade-off let us render thousands of long, smooth trails on mobile.&lt;/p&gt;
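&lt;p&gt;The backward re-trace can be illustrated on the CPU. In the engine this loop runs per-vertex in the shader; &lt;code&gt;sampleFlow&lt;/code&gt; here is a stand-in for the flow-field texture fetch:&lt;/p&gt;

```javascript
// Re-trace a trail backwards from its head through the flow field (sketch).
// sampleFlow stands in for the shader's flow-field texture lookup.
function sampleFlow(x, y) {
  const angle = Math.sin(x * 0.1) + Math.cos(y * 0.1);
  return { x: Math.cos(angle), y: Math.sin(angle) }; // unit direction
}

function traceTrail(headX, headY, segments, stepSize) {
  const points = [{ x: headX, y: headY }];
  let x = headX;
  let y = headY;
  let i = segments;
  while (i > 0) {
    const v = sampleFlow(x, y);
    // Step against the flow direction to walk back along the path
    x -= v.x * stepSize;
    y -= v.y * stepSize;
    points.push({ x: x, y: y });
    i -= 1;
  }
  return points;
}
```

&lt;p&gt;Only &lt;code&gt;headX&lt;/code&gt;/&lt;code&gt;headY&lt;/code&gt; ever leave memory; the rest of the trail is recomputed from the deterministic field every frame.&lt;/p&gt;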




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;The result is &lt;a href="https://therapywarsaw.com" rel="noopener noreferrer"&gt;therapywarsaw.com&lt;/a&gt;—a site where the background is a living simulation, a quiet texture that reflects the nature of the work.&lt;/p&gt;

&lt;p&gt;The engine is open source:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/23x2/generative-flow-field" rel="noopener noreferrer"&gt;github.com/23x2/generative-flow-field&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Questions about the shader pipeline or Transform Feedback? Ask below.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webgpu</category>
      <category>webgl2</category>
      <category>design</category>
      <category>genart</category>
    </item>
  </channel>
</rss>
