<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jaysmito Mukherjee</title>
    <description>The latest articles on DEV Community by Jaysmito Mukherjee (@jaysmito101).</description>
    <link>https://dev.to/jaysmito101</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F716987%2F7e2853be-840a-41ba-996e-398f64f2a4a3.jpeg</url>
      <title>DEV Community: Jaysmito Mukherjee</title>
      <link>https://dev.to/jaysmito101</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jaysmito101"/>
    <language>en</language>
    <item>
      <title>High Performance GPGPU with Rust and wgpu</title>
      <dc:creator>Jaysmito Mukherjee</dc:creator>
      <pubDate>Sun, 14 Dec 2025 14:46:57 +0000</pubDate>
      <link>https://dev.to/jaysmito101/high-performance-gpgpu-with-rust-and-wgpu-4l9i</link>
      <guid>https://dev.to/jaysmito101/high-performance-gpgpu-with-rust-and-wgpu-4l9i</guid>
      <description>&lt;h1&gt;
  
  
  High Performance GPGPU with Rust and wgpu
&lt;/h1&gt;

&lt;p&gt;General-purpose computing on graphics processing units, or GPGPU, has transformed high-performance computing. By offloading parallelizable tasks to the massive number of cores on modern graphics cards, developers can achieve performance gains spanning orders of magnitude over CPU execution. While CUDA has long been the standard, the ecosystem is evolving. The &lt;code&gt;wgpu&lt;/code&gt; crate in Rust offers a compelling, portable, and safe alternative that runs on Vulkan, Metal, DirectX 12, and even inside web browsers via WebGPU. This article explores how to leverage &lt;code&gt;wgpu&lt;/code&gt; for compute workloads, moving beyond rendering triangles to processing raw data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture of a Compute Application
&lt;/h2&gt;

&lt;p&gt;A GPGPU application differs significantly from a traditional rendering loop. In a graphics context, the pipeline is complex, involving vertex shaders, fragment shaders, rasterization, and depth buffers. A compute pipeline is refreshingly simple by comparison. It consists primarily of data buffers and a compute shader. The workflow involves initializing the GPU device, loading the shader code, creating memory buffers accessible by the GPU, and dispatching "workgroups" to execute the logic.&lt;/p&gt;

&lt;p&gt;The core abstraction in &lt;code&gt;wgpu&lt;/code&gt; involves the Instance, Adapter, Device, and Queue. The Instance is the entry point to the API. The Adapter represents the physical hardware card. The Device is the logical connection that allows you to create resources, and the Queue is where you submit command buffers for execution. Unlike graphics rendering which requires a windowing surface, a compute context can run entirely "headless," making it ideal for background processing tools or server-side applications.&lt;/p&gt;
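
&lt;p&gt;As a rough sketch, headless initialization can look like the following (this assumes the &lt;code&gt;wgpu&lt;/code&gt; and &lt;code&gt;pollster&lt;/code&gt; crates; exact signatures vary slightly between wgpu releases):&lt;/p&gt;

```rust
// A minimal headless setup sketch. Assumes the `wgpu` and `pollster`
// crates; exact signatures differ slightly between wgpu releases.
async fn setup() -> (wgpu::Device, wgpu::Queue) {
    let instance = wgpu::Instance::default();
    // No window surface is involved: the adapter is requested headlessly.
    let adapter = instance
        .request_adapter(&amp;amp;wgpu::RequestAdapterOptions::default())
        .await
        .expect("no suitable GPU adapter found");
    adapter
        .request_device(&amp;amp;wgpu::DeviceDescriptor::default(), None)
        .await
        .expect("failed to create logical device")
}

fn main() {
    let (_device, _queue) = pollster::block_on(setup());
}
```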

&lt;h2&gt;
  
  
  Writing the Kernel in WGSL
&lt;/h2&gt;

&lt;p&gt;The logic executed on the GPU is written in the WebGPU Shading Language (WGSL). This language feels like a blend of Rust and GLSL. For a compute shader, we define an entry point decorated with the &lt;code&gt;@compute&lt;/code&gt; attribute and specify a workgroup size. The GPU executes this function in parallel across a 3D grid.&lt;/p&gt;

&lt;p&gt;Consider a simple kernel that performs vector multiplication. We define a storage buffer to hold our input and output data. The built-in variable &lt;code&gt;global_invocation_id&lt;/code&gt; allows us to determine which specific element of the array the current thread should process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// shader.wgsl&lt;/span&gt;
&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="nf"&gt;group&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="nf"&gt;binding&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;read_write&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;f32&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="nf"&gt;workgroup_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="nf"&gt;builtin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;global_invocation_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;global_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;vec3&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u32&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;global_id&lt;/span&gt;&lt;span class="py"&gt;.x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;// Guard against out-of-bounds access if the array size &lt;/span&gt;
    &lt;span class="c1"&gt;// isn't a perfect multiple of the workgroup size&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nf"&gt;arrayLength&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, the workgroup size is set to 64. When we dispatch work from the Rust side, we will calculate how many groups of 64 are needed to cover our data array. The logic inside the function is simple, but the hardware will execute thousands of these instances simultaneously.&lt;/p&gt;
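
&lt;p&gt;The dispatch count generalizes to ceiling division: when the array length is not an exact multiple of the workgroup size, we round up and let the bounds check in the shader discard the extra threads. In plain Rust (the helper name here is illustrative, not part of the wgpu API):&lt;/p&gt;

```rust
// Number of workgroups needed to cover `len` elements.
fn workgroup_count(len: u32, workgroup_size: u32) -> u32 {
    // Ceiling division: a partial tail still needs one more workgroup;
    // the bounds check inside the shader discards the overshoot threads.
    (len + workgroup_size - 1) / workgroup_size
}

fn main() {
    assert_eq!(workgroup_count(1024, 64), 16); // exact fit: 1024 / 64
    assert_eq!(workgroup_count(1000, 64), 16); // rounded up from 15.625
    println!("ok");
}
```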

&lt;h2&gt;
  
  
  Buffer Management and Bind Groups
&lt;/h2&gt;

&lt;p&gt;Memory management is the most critical aspect of GPGPU programming. The CPU and GPU often have distinct memory spaces. To bridge this gap, &lt;code&gt;wgpu&lt;/code&gt; uses buffers. For a compute operation, we typically need a Storage Buffer, which allows the shader to read and write arbitrary data. However, the CPU usually cannot read GPU-local memory directly, or can do so only slowly. Therefore, we often use a Staging Buffer strategy: we create a buffer on the GPU for processing and a separate buffer that can be mapped for reading by the CPU.&lt;/p&gt;
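
&lt;p&gt;A sketch of the two-buffer strategy (assuming an initialized &lt;code&gt;device&lt;/code&gt;, the &lt;code&gt;wgpu&lt;/code&gt; crate's &lt;code&gt;util::DeviceExt&lt;/code&gt; extension trait, and the &lt;code&gt;bytemuck&lt;/code&gt; crate for byte casting; descriptor fields may differ slightly between wgpu versions):&lt;/p&gt;

```rust
// Sketch: a GPU-side storage buffer plus a CPU-mappable staging buffer.
use wgpu::util::DeviceExt;

let input: Vec&amp;lt;f32&amp;gt; = vec![2.0; 1024];
let size = (input.len() * std::mem::size_of::&amp;lt;f32&amp;gt;()) as u64;

// Lives in GPU memory; the shader reads and writes it. COPY_SRC lets
// us copy the results out afterwards.
let storage_buffer = device.create_buffer_init(&amp;amp;wgpu::util::BufferInitDescriptor {
    label: Some("storage"),
    contents: bytemuck::cast_slice(&amp;amp;input),
    usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
});

// CPU-visible; we copy results into it, then map it for reading.
let staging_buffer = device.create_buffer(&amp;amp;wgpu::BufferDescriptor {
    label: Some("staging"),
    size,
    usage: wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST,
    mapped_at_creation: false,
});
```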

&lt;p&gt;Once the buffers are created, we must tell the shader where to find them. This is done via Bind Groups. A Bind Group Layout describes the interface—stating that binding slot 0 is a storage buffer. The Bind Group itself connects the actual &lt;code&gt;wgpu::Buffer&lt;/code&gt; object to that slot. This strict separation of layout and data allows &lt;code&gt;wgpu&lt;/code&gt; to validate resource usage before the GPU ever sees a command, preventing many common crashes associated with low-level graphics APIs.&lt;/p&gt;
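
&lt;p&gt;For the single-binding shader above, the layout and bind group might be declared like this (a sketch assuming &lt;code&gt;device&lt;/code&gt; and a &lt;code&gt;storage_buffer&lt;/code&gt; already exist):&lt;/p&gt;

```rust
// Sketch: the layout declares the interface; the bind group attaches data.
let bind_group_layout = device.create_bind_group_layout(&amp;amp;wgpu::BindGroupLayoutDescriptor {
    label: None,
    entries: &amp;amp;[wgpu::BindGroupLayoutEntry {
        binding: 0,
        visibility: wgpu::ShaderStages::COMPUTE,
        ty: wgpu::BindingType::Buffer {
            ty: wgpu::BufferBindingType::Storage { read_only: false },
            has_dynamic_offset: false,
            min_binding_size: None,
        },
        count: None,
    }],
});

let bind_group = device.create_bind_group(&amp;amp;wgpu::BindGroupDescriptor {
    label: None,
    layout: &amp;amp;bind_group_layout,
    entries: &amp;amp;[wgpu::BindGroupEntry {
        binding: 0,
        resource: storage_buffer.as_entire_binding(),
    }],
});
```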

&lt;h2&gt;
  
  
  Dispatching the Work
&lt;/h2&gt;

&lt;p&gt;With the pipeline created and data uploaded, we proceed to command encoding. We create a &lt;code&gt;CommandEncoder&lt;/code&gt; and begin a compute pass. Inside this pass, we set the pipeline, set the bind group containing our data buffers, and call &lt;code&gt;dispatch_workgroups&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The dispatch call requires understanding the grid dimensionality. If we have an array of 1024 elements and a shader workgroup size of 64, we must dispatch 16 workgroups on the X-axis (1024 divided by 64).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;encoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="nf"&gt;.create_command_encoder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nn"&gt;wgpu&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;CommandEncoderDescriptor&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;None&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;cpass&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;encoder&lt;/span&gt;&lt;span class="nf"&gt;.begin_compute_pass&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nn"&gt;wgpu&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ComputePassDescriptor&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
        &lt;span class="n"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="n"&gt;timestamp_writes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;None&lt;/span&gt; 
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="n"&gt;cpass&lt;/span&gt;&lt;span class="nf"&gt;.set_pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;compute_pipeline&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;cpass&lt;/span&gt;&lt;span class="nf"&gt;.set_bind_group&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;bind_group&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[]);&lt;/span&gt;
    &lt;span class="n"&gt;cpass&lt;/span&gt;&lt;span class="nf"&gt;.dispatch_workgroups&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data_size&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After dispatching, if we intend to read the results back to the CPU, we must issue a copy command. This command copies the data from the GPU-resident storage buffer into a map-readable staging buffer. Finally, we finish the encoder and submit the command buffer to the queue.&lt;/p&gt;
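
&lt;p&gt;The copy and submission can be sketched as follows (assuming the &lt;code&gt;encoder&lt;/code&gt;, buffers, and byte &lt;code&gt;size&lt;/code&gt; from the surrounding discussion):&lt;/p&gt;

```rust
// Sketch: copy GPU results into the staging buffer, then submit the work.
encoder.copy_buffer_to_buffer(&amp;amp;storage_buffer, 0, &amp;amp;staging_buffer, 0, size);
queue.submit(Some(encoder.finish()));
```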

&lt;h2&gt;
  
  
  Asynchronous Readback
&lt;/h2&gt;

&lt;p&gt;One aspect of &lt;code&gt;wgpu&lt;/code&gt; that often trips up developers coming from blocking APIs is its asynchronous nature. Submitting the work to the queue returns immediately, but the GPU has only just received the instructions. To read the data back, we must map the staging buffer. This is an asynchronous operation: &lt;code&gt;map_async&lt;/code&gt; registers a callback that fires once the mapping is complete.&lt;/p&gt;

&lt;p&gt;To resolve this, the application must poll the device. In a native environment, we call &lt;code&gt;device.poll(wgpu::Maintain::Wait)&lt;/code&gt;. This blocks the main thread until the GPU operations are complete and the map callback has fired. Once the buffer is mapped, we can cast the raw bytes back into a Rust slice, copy the data to a local vector, and unmap the buffer. This creates a synchronization point, ensuring the GPU has finished its heavy lifting before the CPU attempts to interpret the results.&lt;/p&gt;
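
&lt;p&gt;Putting the readback together (a sketch assuming &lt;code&gt;device&lt;/code&gt; and &lt;code&gt;staging_buffer&lt;/code&gt; from earlier, plus &lt;code&gt;bytemuck&lt;/code&gt; for the byte cast; the polling API has shifted slightly across wgpu versions):&lt;/p&gt;

```rust
// Sketch: map the staging buffer, wait for the GPU, read the bytes back.
let slice = staging_buffer.slice(..);
let (tx, rx) = std::sync::mpsc::channel();
slice.map_async(wgpu::MapMode::Read, move |result| {
    tx.send(result).unwrap();
});

// Block until the GPU has finished and the map callback has fired.
device.poll(wgpu::Maintain::Wait);
rx.recv().unwrap().expect("buffer mapping failed");

let mapped = slice.get_mapped_range();
let result: Vec&amp;lt;f32&amp;gt; = bytemuck::cast_slice(&amp;amp;mapped).to_vec();
drop(mapped);
staging_buffer.unmap();
```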

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;wgpu&lt;/code&gt; ecosystem provides a robust foundation for GPGPU programming that prioritizes safety and portability without sacrificing the raw parallel power of the hardware. By standardizing on WGSL and the WebGPU resource model, developers can write compute kernels that run seamlessly on desktop, mobile, and web. While the boilerplate for setting up pipelines and managing memory buffers is more verbose than high-level CPU threading, the payoff is the ability to process massive datasets in parallel, unlocking performance capabilities that are simply unattainable on the CPU alone.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>performance</category>
      <category>rust</category>
    </item>
    <item>
      <title>TerraGen3D - 3D Procedural Terrain Generation Tool in OpenGL/C++</title>
      <dc:creator>Jaysmito Mukherjee</dc:creator>
      <pubDate>Fri, 01 Oct 2021 09:31:29 +0000</pubDate>
      <link>https://dev.to/jaysmito101/terragen3d-3d-procedural-terrain-generation-tool-in-opengl-c-375f</link>
      <guid>https://dev.to/jaysmito101/terragen3d-3d-procedural-terrain-generation-tool-in-opengl-c-375f</guid>
      <description>&lt;p&gt;I am making a 3D procedural generation tool, completely open source and free!&lt;/p&gt;

&lt;p&gt;Get it:&lt;br&gt;
&lt;a href="https://github.com/Jaysmito101/TerraGen3D"&gt;https://github.com/Jaysmito101/TerraGen3D&lt;/a&gt;&lt;br&gt;
&lt;a href="https://sourceforge.net/projects/terragen3d/"&gt;https://sourceforge.net/projects/terragen3d/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tutorials : &lt;a href="https://www.youtube.com/playlist?list=PLl3xhxX__M4A74aaTj8fvqApu7vo3cOiZ"&gt;https://www.youtube.com/playlist?list=PLl3xhxX__M4A74aaTj8fvqApu7vo3cOiZ&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Join the Discord Server : &lt;a href="https://discord.gg/AcgRafSfyB"&gt;https://discord.gg/AcgRafSfyB&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What can this do?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Generate 3D terrain procedurally&lt;/li&gt;
&lt;li&gt;Export the terrain mesh as OBJ&lt;/li&gt;
&lt;li&gt;Write and test your own shaders&lt;/li&gt;
&lt;li&gt;An inbuilt IDE for shaders&lt;/li&gt;
&lt;li&gt;Test under different lighting&lt;/li&gt;
&lt;li&gt;A 3D viewer&lt;/li&gt;
&lt;li&gt;A node-based as well as a layer-based workflow&lt;/li&gt;
&lt;li&gt;Save the project (custom &lt;code&gt;.terr3d&lt;/code&gt; files)&lt;/li&gt;
&lt;li&gt;Height map visualizer in the node editor&lt;/li&gt;
&lt;li&gt;Wireframe mode&lt;/li&gt;
&lt;li&gt;Custom lighting&lt;/li&gt;
&lt;li&gt;Customizable geometry shaders included in the rendering pipeline&lt;/li&gt;
&lt;li&gt;Skyboxes&lt;/li&gt;
&lt;li&gt;Multithreaded mesh generation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.lua.org/"&gt;Lua&lt;/a&gt; scripting to add custom algorithms&lt;/li&gt;
&lt;li&gt;Export to heightmaps (both PNG and a custom format)&lt;/li&gt;
&lt;li&gt;Custom skyboxes&lt;/li&gt;
&lt;li&gt;A completely usable 3D procedural modelling and texturing pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Future Goals
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Procedural grass and foliage&lt;/li&gt;
&lt;li&gt;Fix more bugs!&lt;/li&gt;
&lt;li&gt;Many more things..&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Screenshots
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SpvamjnJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Jaysmito101/TerraGen3D/master/Screenshots/Version%25203/Screenshot%2520%281%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SpvamjnJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Jaysmito101/TerraGen3D/master/Screenshots/Version%25203/Screenshot%2520%281%29.png" alt="Screenshot 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HxI6XH14--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Jaysmito101/TerraGen3D/master/Screenshots/Version%25203/Screenshot%2520%282%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HxI6XH14--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Jaysmito101/TerraGen3D/master/Screenshots/Version%25203/Screenshot%2520%282%29.png" alt="Screenshot 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_eoefYRh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Jaysmito101/TerraGen3D/master/Screenshots/Version%25203/Screenshot%2520%283%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_eoefYRh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Jaysmito101/TerraGen3D/master/Screenshots/Version%25203/Screenshot%2520%283%29.png" alt="Screenshot 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Support
&lt;/h1&gt;

&lt;p&gt;I am just a high school student, so my code may not be of the best quality, but I am trying my best to write good code!&lt;/p&gt;

&lt;p&gt;Any support would be highly appreciated!&lt;/p&gt;

&lt;p&gt;For example, you could add a feature and contribute via pull requests, or report any issues you find with the program!&lt;/p&gt;

&lt;p&gt;And the best thing you could do to support this project is spread the word, so that more people who might be interested can find and use it!&lt;/p&gt;

&lt;p&gt;Please consider tweeting about this!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ctt.ac/MX5_c"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bsDDv_CG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://clicktotweet.com/img/tweet-graphic-4.png" alt="Tweet: Check out TerraGen3D Free and Open Source Procedural Modelling and Texturing Software : https://github.com/Jaysmito101/TerraGen3D"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Join the Discord Server : &lt;a href="https://discord.gg/AcgRafSfyB"&gt;https://discord.gg/AcgRafSfyB&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
