Raymarching: A Better Way to Explore NASA’s X-Ray Scans with ThreeJs

Recently I experimented with NASA’s Astromaterials 3D virtual library, a collection that provides access to digital scans of Apollo samples, meteorites, and Antarctic rocks, and built this variation of it using ThreeJs. Instead of just viewing static models, I wanted to explore their inner structures in real time, much the way a geologist might examine a thin slice under a microscope.

To achieve this, I built a real-time volumetric data visualization tool using raymarching. This technique allowed me to render NASA’s X-ray computed tomography (CT) data directly into interactive 3D volumes. Instead of pre-baked meshes, the rocks are reconstructed on the fly, voxel by voxel, giving a much richer sense of depth and material composition.

One key feature I implemented was an intensity-based filtering system. With it, you can interactively isolate layers or internal structures based on density values. For example, you can peel away “less dense” material from a meteorite and reveal inclusions or harder structures hidden inside. This makes the exploration process feel less like looking at a 3D model and more like conducting an actual virtual dissection of the sample.
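
To give a rough idea of how that filter works inside the shader, here is a minimal sketch of the idea; the uniform names (uMinIntensity, uMaxIntensity) and the sampleVolume helper are illustrative placeholders, not the actual names from my shader:

uniform float uMinIntensity; // lower bound of the density window
uniform float uMaxIntensity; // upper bound of the density window

// ...inside the raymarching loop:
float density = sampleVolume(position).r; // grayscale CT value in [0, 1]
if (density < uMinIntensity || density > uMaxIntensity) {
    // Outside the selected density window: treat as empty space and keep marching
    position += rayDirection * stepSize;
    continue;
}
// Otherwise the voxel is visible material: shade it and stop the ray here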

The result is a tool that not only visualizes NASA’s CT data but also encourages a more intuitive and exploratory way of studying planetary materials. While my project is just a personal take on their library, I think it highlights how modern real-time rendering techniques can make complex scientific datasets more accessible and engaging.

Now let’s talk about raymarching:

What is Raymarching?

Raymarching is a way of drawing 3D scenes on a computer without using traditional 3D models (no polygons or vertices involved).

Instead of having meshes made of triangles, you imagine shooting a ray (like a straight line of sight) from the camera into the scene. The ray moves forward step by step (“marching”) and at each step it asks: am I inside an object yet? how close am I to the nearest surface?

This is usually done with distance functions, which can tell you how far you are from a shape (like a sphere or cube). The ray keeps marching until it gets close enough to a surface, and then you know that’s where the object is.

It’s like sonar or echolocation, but in reverse: instead of sound bouncing back, you walk forward in little steps until you bump into something. That’s how the computer figures out what’s visible in that direction.
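
Here is a minimal sketch of the classic distance-function version in GLSL, using a single sphere; this illustrates the general technique rather than the code from my project:

float sdSphere(vec3 p, float radius) {
    // Signed distance from point p to a sphere centered at the origin
    return length(p) - radius;
}

bool raymarch(vec3 rayOrigin, vec3 rayDirection, out vec3 hitPoint) {
    float traveled = 0.0;
    for (int i = 0; i < 100; i++) {
        vec3 p = rayOrigin + rayDirection * traveled;
        float dist = sdSphere(p, 1.0);   // how far is the nearest surface?
        if (dist < 0.001) {              // close enough: call it a hit
            hitPoint = p;
            return true;
        }
        traveled += dist;                // safe to march this far without overshooting
        if (traveled > 100.0) break;     // ray escaped the scene, give up
    }
    return false;
}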

But wait, how do you combine X-Ray slices with raymarching?

The process started with a simple setup in Three.js: a 1x1x1 cube mesh with a custom shader material. The cube serves as the container for the volumetric data, while the fragment shader handles the heavy lifting of the raymarching process.

Here’s how it works:

  • Camera direction & contact point — Using the camera’s direction and the UV coordinates available when the fragment shader runs, I calculated the local position inside the cube where the ray enters.
  • Mapping slices to planes — NASA provides CT scan data as image slices across three planes: XZ, XY, and YZ. By projecting the local position of the ray step onto each plane, I could determine the corresponding UV for each slice atlas.
  • Advancing the ray — As the ray marched through the volume, I sampled these slice images. Black areas were ignored (empty space), while grey or white pixels indicated material density. When the ray hit a non-empty voxel, it stopped, effectively reconstructing the surface or internal features of the rock (see the sketch just below).
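
A simplified sketch of that marching loop could look like the following; MAX_STEPS, EMPTY_THRESHOLD, sampleVolume and shade are illustrative placeholders (the real sampling function is shown in the next section):

// entryPoint: local-space position where the ray enters the unit cube.
// Because the cube is 1x1x1, positions inside it double as 3D texture coordinates.
vec3 position = entryPoint;
vec3 stepVector = rayDirection / float(MAX_STEPS);
vec3 color = vec3(0.0);

for (int i = 0; i < MAX_STEPS; i++) {
    // Stop once the ray exits the cube
    if (any(lessThan(position, vec3(0.0))) || any(greaterThan(position, vec3(1.0)))) break;

    float density = sampleVolume(position).r; // read the CT slices at this voxel
    if (density > EMPTY_THRESHOLD) {
        // Non-empty voxel: we found material, shade it and stop marching
        color = shade(position, density);
        break;
    }
    position += stepVector; // empty space (black pixel): keep marching
}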

This method transforms stacks of 2D CT slices into a real-time volumetric visualization. Instead of static meshes, the rock’s structure emerges dynamically, with the raymarching process guided by the X-ray data itself.
The end result is an interactive way to peer inside NASA’s astromaterials, layer by layer, revealing veins, inclusions, and hidden textures that would otherwise remain locked within the dataset.

Preview

How is the X-Ray data packed & read?

NASA provides an atlas for each face of the bounding box cube that wraps the sample rocks (XY, XZ and YZ), like this:

Atlas image view

And to read each frame in that atlas, you would write a function like this in GLSL shader code (using a ShaderMaterial in three.js). Each frame in the atlas represents a layer. Think of it as cutting many tiny slices of a sweet apple pie…

vec3 sampleAtlas(sampler2D atlas, vec4 info, vec2 uv, float slicePercent)
{
    // info packs the atlas layout:
    //   info.x = frame width in atlas UV space, info.y = frame height
    //   info.z = frames per row, info.w = total frame count

    // Keep the slice position in [0, 1]
    float t = clamp(slicePercent, 0.0, 1.0);

    float totalFrames = info.w;
    float framesOnWidth = info.z;

    // Compute frame index - avoid int operations when possible
    float sliceIndexFloat = floor(t * totalFrames);
    sliceIndexFloat = min(sliceIndexFloat, totalFrames - 1.0);

    // Use mod for column calculation (GPU optimized)
    float column = mod(sliceIndexFloat, framesOnWidth);
    float row = floor(sliceIndexFloat / framesOnWidth);

    // Compute UV in atlas - vectorize operations
    vec2 frameOffset = vec2(column, row);
    vec2 sampleUV = vec2(
        info.x * (frameOffset.x + uv.x),
        1.0 - info.y * (frameOffset.y + uv.y)
    );

    return texture(atlas, sampleUV).rgb;
}

Since the base mesh is a cube intentionally kept at 1x1x1 units, the ray position at any given time (a vector3) can be used directly to obtain the UV on each face of the cube: the position itself is the UV on that face.

  • Atlas XY would have each frame belonging to the Z axis.
  • Atlas XZ would have each frame belonging to the Y axis.
  • Atlas YZ would have each frame belonging to the X axis.

This way we can easily know which color to sample at any given point along the ray.
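
To make that concrete, here is roughly how the per-plane weights and UVs can be derived from the ray state before calling the sampler shown in the next section; this is a sketch of the idea, not the exact code from my shader:

// Weight each plane by how strongly the ray direction points along its axis:
// a ray traveling mostly along Z mostly needs the XY atlas, and so on.
vec3 weight = abs(normalize(rayDirection));
weight /= (weight.x + weight.y + weight.z); // one reasonable normalization so contributions sum to 1

// The cube is 1x1x1, so the local position already lives in [0,1]^3 and can
// be used directly as the per-plane UVs plus the slice percentage.
vec3 density = sampleVolumeOptimized(position, weight);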

Optimizing Raymarching with Early Exit Based on Dominant Axis

While building my real-time volumetric renderer for NASA’s CT scan data, one of the biggest challenges was performance. Raymarching is expensive by nature, especially when sampling multiple 2D atlases per step (XY, XZ, YZ). To keep things interactive in the browser, I experimented with ways to cut down unnecessary texture lookups.

The Problem

At every ray step, I needed to sample three different slice atlases (XY, XZ, YZ) to reconstruct the density at that position. This was accurate, but costly, especially when rays were aligned close to one axis. In those cases, many of the extra samples contributed little to the final result.

The Optimization

The trick was to look at the dominant axis of the ray direction. If the ray is traveling almost entirely along one axis, then the majority of meaningful data will come from the corresponding atlas.

So instead of always sampling all three, I added an early exit:

  • Check which axis has the highest weight (the strongest contribution).
  • If one axis dominates (greater than 0.8 in my case), only sample that atlas.
  • Otherwise, fall back to sampling all three.

This way, rays aligned with X, Y, or Z take a big shortcut, while oblique angles still get the full accuracy.

The Result

By cutting down the number of texture lookups in these dominant cases, the shader runs significantly faster without noticeable visual loss. It’s a simple heuristic, but it has a big payoff: many rays in typical viewing angles align with a major axis, so this optimization kicks in often.

In other words, don’t do the work if the math tells you it won’t matter much.

vec3 sampleVolumeOptimized(vec3 uvs, vec3 weight) {
    // Find dominant axis to reduce texture samples
    float maxWeight = max(weight.x, max(weight.y, weight.z));

    // If one axis dominates heavily, only sample that texture
    if (maxWeight > 0.8) {
        if (weight.z > 0.8) {
            return sampleAtlas(xyAtlas, xyInfo, uvs.xy, uvs.z);
        } else if (weight.y > 0.8) {
            return sampleAtlas(xzAtlas, xzInfo, vec2(uvs.x, 1.0-uvs.z), uvs.y);
        } else if (weight.x > 0.8) {
            return sampleAtlas(yzAtlas, yzInfo, vec2(uvs.y, 1.0-uvs.z), uvs.x);
        }
    }

    // For oblique angles, sample all three (fallback)
    vec3 xyPixel = sampleAtlas(xyAtlas, xyInfo, uvs.xy, uvs.z);
    vec3 xzPixel = sampleAtlas(xzAtlas, xzInfo, vec2(uvs.x, 1.0-uvs.z), uvs.y);
    vec3 yzPixel = sampleAtlas(yzAtlas, yzInfo, vec2(uvs.y, 1.0-uvs.z), uvs.x);

    // Weight each plane's contribution by how aligned the ray is with its axis
    return xyPixel * weight.z + xzPixel * weight.y + yzPixel * weight.x;
}

This optimization is just one of many small tricks that make real-time raymarching practical in the browser. By letting the math guide where to spend GPU effort, and where to cut corners, it’s possible to turn heavy scientific datasets like NASA’s CT scans into smooth, interactive visualizations. It’s a reminder that performance often comes not from doing more, but from knowing when you can safely do less.
