Alan West

Rust+WASM+WebGL vs React+Three.js: When Going Framework-Free Actually Makes Sense

A Reddit post caught my eye this week — someone built a real-time flight tracker using Rust, WebAssembly, and raw WebGL. No React. No Three.js. No frameworks at all. My first reaction was "why would you do that to yourself?" My second reaction, after looking at the results, was "okay, I get it now."

I've shipped projects on both sides of this fence. I've built 3D visualizations with Three.js and React Three Fiber, and I've also gone down the rabbit hole of writing raw WebGL shaders. Let me walk through when each approach actually makes sense — and what the tradeoffs look like in practice.

Why Even Compare These?

The typical stack for a browser-based 3D project in 2024-2025 looks something like React + Three.js (or React Three Fiber if you want that declarative feel). It's productive, well-documented, and there's a massive ecosystem of helpers.

But there's a growing counter-movement: compile Rust to WebAssembly, talk directly to WebGL (or WebGPU), and skip the JavaScript framework layer entirely. The flight tracker post is a great example — real-time data, thousands of moving objects, smooth globe rendering.

The question isn't "which is better" — it's "which is better for what."

The Framework Approach: React + Three.js

Here's what a basic globe with moving points looks like in React Three Fiber:

import { Canvas, useFrame } from '@react-three/fiber';
import { useRef } from 'react';

function Flight({ position, destination }) {
  const meshRef = useRef();

  useFrame((state, delta) => {
    if (!meshRef.current) return; // ref is null until first render
    // Lerp toward destination each frame (destination is a THREE.Vector3)
    meshRef.current.position.lerp(destination, delta * 0.5);
  });

  return (
    <mesh ref={meshRef} position={position}>
      <sphereGeometry args={[0.02, 8, 8]} />
      <meshBasicMaterial color="#00ff88" />
    </mesh>
  );
}

function Globe({ flights }) {
  return (
    <Canvas camera={{ position: [0, 0, 3] }}>
      <ambientLight intensity={0.5} />
      {/* Each flight is its own React component */}
      {flights.map(f => (
        <Flight key={f.id} position={f.pos} destination={f.dest} />
      ))}
    </Canvas>
  );
}

What's great here:

  • Readable, declarative scene graph
  • Hot module reloading works out of the box
  • Huge ecosystem (drei helpers, postprocessing, physics)
  • You can ship a prototype in a weekend

What's not great:

  • Each flight is a React component with its own reconciliation overhead
  • At 5,000+ moving objects, you start feeling the GC pauses
  • The abstraction layers (React → R3F → Three.js → WebGL) each add per-frame overhead
  • Bundle size creeps up fast — Three.js alone is ~150KB gzipped

The Bare Metal Approach: Rust + WASM + WebGL

Here's roughly how the same concept looks when you go direct:

// Vertex shader for instanced flight rendering
const VERTEX_SHADER: &str = r#"
  attribute vec3 a_position;
  attribute vec3 a_instance_pos;  // per-flight position via instancing
  uniform mat4 u_view_proj;

  void main() {
    vec3 world_pos = a_position * 0.02 + a_instance_pos;
    gl_Position = u_view_proj * vec4(world_pos, 1.0);
  }
"#;

// Update all flight positions in one tight loop
pub fn update_flights(flights: &mut [Flight], dt: f32) {
    for flight in flights.iter_mut() {
        // Simple lerp — no allocations, no GC
        flight.position.x += (flight.destination.x - flight.position.x) * dt * 0.5;
        flight.position.y += (flight.destination.y - flight.position.y) * dt * 0.5;
        flight.position.z += (flight.destination.z - flight.position.z) * dt * 0.5;
    }
}
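One caveat worth flagging in the loop above (my own observation, not from the original post): a fixed `dt * 0.5` factor converges at different speeds at different frame rates. A minimal sketch of a frame-rate-independent variant using an exponential-decay factor, with hypothetical helper names:

```rust
// Hypothetical helpers, not from the original post: an exponential-decay
// lerp factor makes convergence frame-rate independent, because composing
// N small steps equals one big step over the same total time.
fn lerp_factor(rate: f32, dt: f32) -> f32 {
    1.0 - (-rate * dt).exp()
}

fn step_toward(pos: f32, dest: f32, rate: f32, dt: f32) -> f32 {
    pos + (dest - pos) * lerp_factor(rate, dt)
}

fn main() {
    // One 0.1 s step and ten 0.01 s steps land on (nearly) the same point.
    let coarse = step_toward(0.0, 1.0, 5.0, 0.1);
    let mut fine = 0.0;
    for _ in 0..10 {
        fine = step_toward(fine, 1.0, 5.0, 0.01);
    }
    assert!((coarse - fine).abs() < 1e-4);
}
```

Apply the same factor per axis exactly as the original loop does; the only change is how the blend weight is derived from `dt`.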

And the WASM-to-WebGL bridge using web-sys:

use web_sys::{WebGlRenderingContext as GL, WebGlBuffer};

pub fn upload_positions(gl: &GL, buffer: &WebGlBuffer, flights: &[Flight]) {
    let data: Vec<f32> = flights
        .iter()
        .flat_map(|f| [f.position.x, f.position.y, f.position.z])
        .collect();

    gl.bind_buffer(GL::ARRAY_BUFFER, Some(buffer));
    // Upload all positions in a single GPU call.
    // SAFETY: Float32Array::view creates a raw view into WASM linear memory;
    // it must be consumed before any Rust allocation can grow/move the heap
    // and invalidate the view.
    unsafe {
        let view = js_sys::Float32Array::view(&data);
        gl.buffer_sub_data_with_i32_and_array_buffer_view(
            GL::ARRAY_BUFFER, 0, &view
        );
    }
}
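One nit with the snippet above: `collect()` allocates a fresh `Vec` every frame, which undercuts the "no allocations" pitch. A sketch of a variation (my own, with hypothetical names) that reuses a scratch buffer, so the hot path stops allocating once the capacity is warm:

```rust
// Hypothetical variation on upload_positions: keep one scratch Vec alive
// across frames. clear() retains capacity, so after the first frame the
// fill is allocation-free as long as the fleet doesn't grow.
pub struct Flight {
    pub position: [f32; 3],
}

pub fn fill_scratch(scratch: &mut Vec<f32>, flights: &[Flight]) {
    scratch.clear();
    for f in flights {
        scratch.extend_from_slice(&f.position);
    }
}

fn main() {
    let flights = vec![
        Flight { position: [1.0, 2.0, 3.0] },
        Flight { position: [4.0, 5.0, 6.0] },
    ];
    let mut scratch = Vec::new();
    fill_scratch(&mut scratch, &flights);
    let cap = scratch.capacity();
    fill_scratch(&mut scratch, &flights); // second frame: no reallocation
    assert_eq!(scratch, vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]);
    assert_eq!(scratch.capacity(), cap);
}
```

The same `&scratch` slice then feeds `Float32Array::view` exactly as `data` does above.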

What's great here:

  • Predictable performance — no GC, no reconciliation
  • GPU instancing means 10,000 flights cost barely more than 100
  • Total payload can be under 200KB including the WASM binary
  • You control every draw call

What's not great:

  • You're writing shaders by hand. Debugging them is painful.
  • web-sys bindings are verbose and sometimes awkward
  • No scene graph — you manage all transforms yourself
  • Iteration speed is significantly slower (compile, bind, test)
  • The talent pool for maintenance is much smaller

Side-by-Side: Where Each Wins

| Factor | React + Three.js | Rust + WASM + WebGL |
| --- | --- | --- |
| Time to prototype | Hours/days | Weeks |
| Performance ceiling | Good (thousands) | Excellent (tens of thousands) |
| Bundle size | ~200KB+ gzipped | ~100-200KB total |
| Developer experience | Excellent | Rough but improving |
| Ecosystem/plugins | Massive | Minimal |
| GC pauses | Yes, noticeable at scale | None |
| UI integration | Native React | Requires JS interop bridge |
| Hiring/maintainability | Easy | Niche |

When to Go Framework-Free

After working with both, here's my honest take:

Use React + Three.js when:

  • Your scene has fewer than ~2,000 dynamic objects
  • You need UI overlays tightly integrated with 3D content
  • You're on a team and need maintainability
  • You want to ship fast and iterate

Consider Rust + WASM + raw WebGL when:

  • You're rendering thousands of real-time data points (like flight trackers)
  • Performance budgets are extremely tight (embedded devices, kiosks)
  • Bundle size matters significantly
  • You already know Rust and enjoy the control

The flight tracker is actually a perfect use case for the bare metal approach: a data-heavy visualization with thousands of moving objects and minimal UI complexity, where performance is the whole point.
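For a globe renderer like this, the first geometry step is mapping each aircraft's latitude/longitude onto the sphere. A minimal sketch (my own, not code from the post, assuming a Y-up unit sphere with longitude 0 on the +X axis):

```rust
// Hypothetical helper: project latitude/longitude (degrees) onto a Y-up
// unit sphere. Multiply by the globe radius to place the point in world space.
fn lat_lon_to_xyz(lat_deg: f32, lon_deg: f32) -> [f32; 3] {
    let (lat, lon) = (lat_deg.to_radians(), lon_deg.to_radians());
    [
        lat.cos() * lon.cos(), // x: toward lon 0 on the equator
        lat.sin(),             // y: toward the north pole
        lat.cos() * lon.sin(), // z: toward lon 90E on the equator
    ]
}

fn main() {
    let equator = lat_lon_to_xyz(0.0, 0.0);
    assert!((equator[0] - 1.0).abs() < 1e-6);
    let pole = lat_lon_to_xyz(90.0, 0.0);
    assert!((pole[1] - 1.0).abs() < 1e-6 && pole[0].abs() < 1e-3);
}
```

The resulting vectors are exactly what a per-instance `a_instance_pos` attribute would carry in the shader shown earlier.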

A Note on What You Actually Need

One thing the framework-free movement gets right is questioning defaults. Do you actually need React for your project, or is it just habit? Do you need a full analytics suite, or would something lighter work?

I've been applying this same thinking to analytics on my own projects. Instead of reaching for Google Analytics out of habit, I've been looking at privacy-focused alternatives. Umami has become my go-to — it's self-hosted, dead simple, and fully GDPR compliant without cookie banners. Plausible is another solid option if you want a hosted service with a clean dashboard. Fathom takes a similar approach with a focus on simplicity.

Umami stands out to me because the self-hosted model means your data never leaves your server, and the interface is genuinely minimal — exactly the "only what you need" philosophy. Plausible offers both hosted and self-hosted and has a slightly richer feature set. Fathom is hosted-only but arguably has the most polished UX of the three.

Same principle as the framework debate: pick the level of abstraction that matches your actual needs, not the one that's most popular.

The Real Takeaway

The Reddit flight tracker isn't impressive because frameworks are bad. It's impressive because the developer matched the tool to the problem. A real-time globe with thousands of moving aircraft is genuinely one of those cases where the overhead of React's reconciliation and Three.js's scene graph becomes the bottleneck.

But if you're building a product configurator or an interactive landing page? Reach for Three.js. Seriously. Your time is worth more than the 3ms you'd save per frame.

The best stack is the one that solves your actual problem without creating three new ones. Sometimes that's a framework. Sometimes it's a compiler and some shaders. Know the tradeoffs, and choose accordingly.
