DEV Community

Todd Tanner

Distributed GPU Compute Across Devices in C#, in the Browser and on the Desktop

Coming VERY Soon: Distributed GPU Compute Across Devices

The SpawnDev.WebTorrent P2P network we're building creates a natural foundation for distributed GPU compute. Every connected device exchanges data over WebRTC — extending this to share compute workloads is the next step:

  • SpawnDev.ILGPU AcceleratorType.P2P (a 7th backend): distributes kernels across connected devices transparently. Same C# kernel code, same LoadAutoGroupedStreamKernel API. The developer writes one kernel; it runs on 1 GPU or on 10 GPUs across a household.

  • SpawnDev.ILGPU.ML model inference sharding: split a 14B-parameter model across multiple devices. Each device runs inference on its shard via SpawnDev.ILGPU and passes the intermediate tensors to the next peer. A model that doesn't fit on any one device runs across your phone, laptop, tablet, and desktop.

  • Volunteer compute pools — Users opt in to donate idle GPU time. Like Folding@Home for ML inference in the browser.
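The claim in the first bullet is that the planned P2P backend reuses stock ILGPU's programming model. For reference, here is what that `LoadAutoGroupedStreamKernel` pattern looks like in standard ILGPU today; the `AcceleratorType.P2P` selection mentioned in the comment is speculative, since that backend has not shipped:

```csharp
using ILGPU;
using ILGPU.Runtime;

class KernelDemo
{
    // A standard ILGPU kernel: one thread per element.
    static void ScaleKernel(Index1D i, ArrayView<float> data, float scale)
    {
        data[i] = data[i] * scale;
    }

    static void Main()
    {
        using var context = Context.CreateDefault();

        // Today: pick a local accelerator (CPU, CUDA, OpenCL, ...).
        // The post's promise is that a future P2P accelerator type would
        // slot in here with no changes to the kernel or the call below.
        using var accelerator = context
            .GetPreferredDevice(preferCPU: false)
            .CreateAccelerator(context);

        var kernel = accelerator
            .LoadAutoGroupedStreamKernel<Index1D, ArrayView<float>, float>(ScaleKernel);

        using var buffer = accelerator.Allocate1D<float>(1024);
        kernel((int)buffer.Length, buffer.View, 2.0f);
        accelerator.Synchronize();
    }
}
```

If the backend works as described, the only line that changes in a multi-device deployment is the accelerator selection; the kernel and launch code stay identical.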

Every device in your home contributing to one shared AI compute pool — phone, laptop, tablet, desktop, old gaming PC. The living room becomes a compute cluster.
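The sharding idea above is essentially pipeline parallelism: each peer owns a contiguous slice of the model's layers and forwards its activations to the next peer. A minimal plain-C# sketch of that flow (the "layers" are stand-in functions and the peer hop is a local call, not real WebRTC or SpawnDev.ILGPU.ML code):

```csharp
using System;
using System.Linq;

class ShardingSketch
{
    // Run one peer's slice of the layer stack over an input tensor.
    static float[] RunShard(Func<float[], float[]>[] shard, float[] input) =>
        shard.Aggregate(input, (acc, layer) => layer(acc));

    static void Main()
    {
        // Stand-in "layers" (a real model's layers would be GPU kernels).
        Func<float[], float[]>[] layers =
        {
            x => x.Select(v => v * 2f).ToArray(),
            x => x.Select(v => v + 1f).ToArray(),
            x => x.Select(v => v * v).ToArray(),
            x => x.Select(v => v - 3f).ToArray(),
        };

        // Two peers, each holding half of the layers.
        var peerA = layers.Take(2).ToArray();
        var peerB = layers.Skip(2).ToArray();

        // In the real system this intermediate tensor would be serialized
        // and sent over WebRTC to the next device; here it is just passed.
        var intermediate = RunShard(peerA, new[] { 1f, 2f });
        var output = RunShard(peerB, intermediate);

        Console.WriteLine(string.Join(", ", output)); // 6, 22
    }
}
```

The design point is that no peer ever holds the whole model; only the (much smaller) intermediate activations cross the wire between devices.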

Resources

SpawnDev.ILGPU is ready to use NOW, 100%. SpawnDev.ILGPU.ML's first official release, 4.0.0, is coming soon, possibly today. Distributed computing is the next thing we'll tackle. Take a look. This is AI for everyone, everywhere, on everything. 🖖
