Ryousuke Wayama
How We Animated GraphCast’s Global Weather Predictions in Real-Time on an iPhone Using Cesium & WebGL Shaders

Imagine taking Google DeepMind’s GraphCast—a state-of-the-art AI weather model—and visualizing its global, high-resolution predictions smoothly on a mobile browser. Sounds like a fast track to melting your iPhone's GPU, right?

Usually, processing and rendering massive multi-dimensional spatial data (like global wind patterns, temperature, and pressure across multiple time steps) requires hefty desktop hardware. When you try to render that kind of heavy data onto a 3D web globe on a mobile device, you typically hit a massive wall: memory limits, terrible framerates, and inevitable browser crashes.

But at the R&D department of Northern System Service—where we regularly build robust GIS systems and visualize spatial data for Japanese government agencies like JAXA and JAMSTEC—we love pushing browser limits.

We didn't want a clunky, pre-rendered video. We wanted real-time, interactive 3D rendering on a smartphone. By rethinking our data pipeline and offloading the heavy lifting to the GPU using Cesium and custom WebGL Shaders (GLSL), we managed to render GraphCast's global predictions natively in the mobile browser at a silky-smooth frame rate.

Here is how we hacked the pipeline, bypassed the CPU bottlenecks, and turned a mobile phone into a real-time global weather simulator.


The "Why": Translating Weather Jargon into 3D Reality

You always hear meteorologists talk about the "eye of a typhoon" or a "pressure trough." But let’s be honest—looking at standard 2D contour lines means almost nothing to the average person.

I thought: What if we could visualize barometric pressure as physical 3D terrain and animate it? If a pressure drop actually looked like a massive crater or a deep valley on a 3D globe, anyone could instantly and intuitively grasp the weather dynamics.

Attempt #1: Cesium Terrain (And The Bottleneck)

My first naive approach was to use Cesium's native Terrain provider. It’s fantastic for rendering static mountains, but animating a dynamic, global, high-resolution 3D mesh across multiple time-steps (using GraphCast's time-series predictions) was a disaster.

The CPU was completely overwhelmed trying to update the geometry frame-by-frame. The browser choked, and achieving a smooth, real-time animation—especially on a mobile device like an iPhone—was completely out of the question. I needed a radically different approach.

The Pivot: Custom WebGL Shaders to the Rescue

If the CPU can't handle updating the 3D geometry, why not force the GPU to do all the heavy lifting? I pivoted to writing custom WebGL shaders.

The Hack: Baking Pixel Indices into a Custom glTF

GraphCast outputs massive global grid data. To animate the globe based on this data, every single point on our 3D model needed to know exactly which pixel to read from the texture.

Instead of doing expensive lookups on the fly, I built a Python pipeline using GDAL, pymap3d, and pygltflib to generate a custom glTF model. I converted WGS84 coordinates into Earth-Centered, Earth-Fixed (ECEF) coordinates.

But here is the secret sauce: I injected the GeoTIFF pixel index directly into the glTF vertex attributes.

```python
import pymap3d as pm
import pygltflib
import numpy as np

def xyz_to_ecef(lon, lat, alt):
    # WGS84 geodetic to ECEF conversion (pymap3d expects lat, lon order)
    x, y, z = pm.geodetic2ecef(lat, lon, alt)
    return x, y, z

# ... generating coordinates ...

# Create the custom glTF with baked attributes
gltf = pygltflib.GLTF2(
    # ...
    accessors=[
        pygltflib.Accessor(
            bufferView=0,
            componentType=pygltflib.FLOAT,
            count=len(points), # 3D model vertex coordinates (ECEF)
            type=pygltflib.VEC3,
            max=points.max(axis=0).tolist(),
            min=points.min(axis=0).tolist(),
        ),
        pygltflib.Accessor(
            bufferView=1,
            componentType=pygltflib.FLOAT,
            count=len(lonlat_list), # WGS84 coordinates
            type=pygltflib.VEC2,
            max=lonlat_list.max(axis=0).tolist(),
            min=lonlat_list.min(axis=0).tolist(),
        ),
        pygltflib.Accessor(
            bufferView=2,
            componentType=pygltflib.UNSIGNED_INT,
            count=len(idx_list), # The magical GeoTIFF pixel index!
            type=pygltflib.VEC2,
            max=idx_list.max(axis=0).tolist(),
            min=idx_list.min(axis=0).tolist(),
        ),
    ],
    # ...
)
```
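For reference, here is a minimal sketch of how such a per-vertex pixel index could be derived from longitude/latitude on GraphCast's 0.25° grid (1440 × 721). The function name and grid orientation (row 0 at +90° latitude, column 0 at 0° longitude) are assumptions for illustration, not our exact pipeline code:

```python
# Hypothetical sketch (not our exact pipeline code): derive the per-vertex
# GeoTIFF pixel index baked into the glTF, assuming GraphCast's 0.25° global
# grid (1440 x 721) with row 0 at latitude +90° and column 0 at longitude 0°.
GRID_W, GRID_H = 1440, 721

def lonlat_to_pixel_idx(lon, lat):
    col = int(round((lon % 360.0) / 0.25)) % GRID_W   # 0..1439, wraps at 360°
    row = int(round((90.0 - lat) / 0.25))             # 0..720, north to south
    return col, row

print(lonlat_to_pixel_idx(139.75, 35.5))  # a grid cell near Tokyo -> (559, 218)
```

Because the index is computed once in Python and baked into the vertex attributes, the shader never has to do this arithmetic per frame.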

Displacement Mapping on the GPU

With the index baked into the geometry, the GPU does the rest. I wrote a custom Vertex Shader based on Cesium's Point Cloud architecture.

For every frame, the shader grabs the baked index (vsInput.attributes.idx), calculates the exact UV position, fetches the barometric pressure from the GeoTIFF texture, calculates the new height, and updates the ECEF coordinates instantly.

```glsl
void vertexMain(VertexInput vsInput, inout czm_modelVertexOutput vsOutput)
{
    // Retrieve WGS84 coords and GeoTIFF index from glTF attributes
    vec2 lonlat = vsInput.attributes.longlat;
    vec2 geotiff_idx = vsInput.attributes.idx;

    // Calculate UV position (global grid is 1440x721, so max indices are 1439 and 720)
    vec2 uv_pos = vec2(float(geotiff_idx.x) / 1439.0, float(geotiff_idx.y) / 720.0);

    // Fetch the pressure value from the GeoTIFF texture
    float zval = get_tex_value(uv_pos);

    // Calculate new Z height based on pressure
    float newz = 1000.0 + ((zval - zmin) * heightBuf);
    vec3 newXyz = geodeticToECEF(lonlat.x, lonlat.y, newz);

    // Update vertex position
    vsOutput.positionMC.x = newXyz.x;
    vsOutput.positionMC.y = newXyz.y;
    vsOutput.positionMC.z = newXyz.z;
    vsOutput.pointSize = 3.0;
}
```
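The geodeticToECEF() helper used above implements the standard closed-form WGS84 conversion. Here is a Python sketch of the same math with the standard WGS84 constants, so you can see what the shader helper has to compute; this is the textbook formula, not our shader source:

```python
import math

# The standard WGS84 geodetic -> ECEF conversion, which a GLSL helper like
# geodeticToECEF() would implement on the GPU.
A  = 6378137.0               # WGS84 semi-major axis [m]
F  = 1.0 / 298.257223563     # WGS84 flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lon_deg, lat_deg, alt_m):
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    # Prime vertical radius of curvature at this latitude
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z
```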

To make it visually intuitive, I wrote a Fragment Shader to colorize the points dynamically. By passing the same texture, we calculate a topographical color map right on the GPU.

```glsl
vec3 get_cs_map(vec2 uv_pos) {
    // Topographical color calculations based on surrounding pixels
    // ...
    return cs_map * dem_color * contour;
}

void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material)
{
    vec2 geotiff_idx = fsInput.attributes.idx;
    vec2 uv_pos = vec2(float(geotiff_idx.x) / 1439.0, float(geotiff_idx.y) / 720.0);

    // Apply dynamic topographical coloring
    material.diffuse = get_cs_map(uv_pos);
}
```
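If you are curious what get_cs_map() does conceptually, here is a rough Python analogue of a pressure-to-color ramp; the breakpoints and colors below are illustrative assumptions, not the shader's actual values:

```python
# Rough Python analogue of a topographical color ramp like the one the
# fragment shader computes. The pressure range and colors are assumptions
# for illustration only.
def pressure_to_color(p_hpa, p_min=950.0, p_max=1050.0):
    t = max(0.0, min(1.0, (p_hpa - p_min) / (p_max - p_min)))  # normalize to [0, 1]
    # Deep blue (low-pressure "crater") -> green -> warm red (high pressure)
    if t < 0.5:
        s = t / 0.5
        return (0.0, s, 1.0 - s)       # blue -> green
    s = (t - 0.5) / 0.5
    return (s, 1.0 - s, 0.0)           # green -> red
```

On the GPU this runs once per fragment, so the coloring stays in sync with the displaced geometry for free.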

Overcoming the Mobile OOM Beast

Everything was looking great until I tried to load the time-series data. GraphCast gave me a 40-band GeoTIFF (representing 10 days at 6-hour intervals).

I tried to load the entire multi-band GeoTIFF at once using geotiff.js in the browser. Immediate crash. The mobile browser threw an Out-of-Memory (OOM) error.

The workaround? I split the 40-band beast into 40 individual single-band GeoTIFFs. By pre-loading these lightweight images sequentially into the browser memory, the crashes stopped entirely. When the user scrubs the timeline slider, the shader simply swaps out which pre-loaded texture it samples from. It runs flawlessly.
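Splitting the bands is a one-liner per band with GDAL (e.g. `gdal_translate -b N`). The browser-side swap then boils down to a tiny lookup; here is a hypothetical Python sketch of the idea (in reality this lives in JavaScript with geotiff.js, and the file names and loader callback below are stand-ins):

```python
# Hypothetical sketch of the OOM workaround: pre-load 40 single-band files
# sequentially and swap which one the shader samples as the slider moves.
# The real implementation uses geotiff.js in the browser.
class TimeStepTextures:
    def __init__(self, n_steps, load_band):
        # Load each lightweight single-band image one at a time, so the
        # full 40-band array is never resident in memory at once.
        self.textures = [load_band(i) for i in range(n_steps)]

    def texture_for(self, slider_hours, step_hours=6):
        # Map a timeline position (hours) to the nearest 6-hour prediction step.
        idx = min(round(slider_hours / step_hours), len(self.textures) - 1)
        return self.textures[idx]

# Usage with a stand-in loader that just returns a file name
texs = TimeStepTextures(40, lambda i: f"pressure_band_{i:02d}.tif")
assert texs.texture_for(13) == "pressure_band_02.tif"
```

Swapping a texture binding is essentially free compared to re-uploading geometry, which is why the timeline scrubbing stays smooth.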

Leveling Up: From Point Cloud to Solid Mesh

While the animated point cloud was incredibly fast, zooming in revealed the gaps between points. I wanted a solid surface.


Could this same shader magic work on a solid mesh instead of just points? Absolutely.

I went back to Python and used the open3d library to reconstruct a continuous mesh from the point cloud using the Ball Pivoting Algorithm. I then appended these triangle indices to my custom glTF.

```python
import numpy as np
import open3d as o3d

# Estimate normals for the point cloud
ptCloud = o3d.geometry.PointCloud()
ptCloud.points = o3d.utility.Vector3dVector(points_ecef_mesh)
ptCloud.estimate_normals()
ptCloud.orient_normals_consistent_tangent_plane(100)

# Choose BPA radii from the average nearest-neighbor distance
distances = ptCloud.compute_nearest_neighbor_distance()
avg_dist = np.mean(distances)
radius = 2 * avg_dist
radii = [radius, radius * 2]

# Reconstruct the mesh using the Ball Pivoting Algorithm
recMeshBPA = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
    ptCloud, o3d.utility.DoubleVector(radii))

# Add triangle polygons to the glTF
triangles = np.array(recMeshBPA.triangles, dtype=np.uint32)
```
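For completeness, here is the binary layout those triangle indices end up in: glTF stores them as a flat little-endian UNSIGNED_INT buffer referenced by the primitive's indices accessor. A minimal sketch with illustrative numbers, not our actual pipeline code:

```python
import numpy as np

# Sketch: pack triangle indices into the binary blob that backs a new glTF
# bufferView/accessor (componentType 5125 = UNSIGNED_INT, used as the mesh
# primitive's "indices"). The two triangles here are illustrative.
triangles = np.array([[0, 1, 2], [2, 1, 3]], dtype=np.uint32)

index_bytes = triangles.tobytes()  # flat uint32 stream, little-endian on typical hardware
byte_length = len(index_bytes)     # -> bufferView.byteLength
index_count = triangles.size       # -> accessor.count (3 indices per triangle)

assert byte_length == 24           # 6 indices * 4 bytes each
assert index_count == 6
```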

3D point cloud

reconstructed mesh

By swapping the point cloud glTF for this new mesh glTF—without changing a single line of the custom shader—the result was a fully dynamic, solid, breathing 3D globe. You can zoom right into the "eye" of a typhoon and see the pressure gradients as a smooth, colored crater.


When I started this project, I fully expected my iPhone to overheat enough to bake my morning eggs on the screen. But as it turns out, the only things we ended up baking were the GeoTIFF indices, directly into the glTF attributes.

My phone stayed perfectly cool, completely ruining my breakfast plans—but we got a buttery-smooth 60 FPS instead. I’d say that’s a fair trade.

(Yes, this is running buttery-smooth on a 2022 iPhone SE. Too old, you say? Hey, give me a break—it’s the absolute perfect size for watching YouTube in bed!)


Conclusion: Pushing the WebGL Envelope

Visualizing complex, global-scale AI predictions like GraphCast doesn't have to melt your CPU. By rethinking the data pipeline and offloading the topological calculations to a custom WebGL shader, we turned a sluggish, memory-crashing process into an interactive 3D globe running natively on an iPhone.

This shader hack isn't just limited to weather forecasting. This architecture can be applied to almost any massive spatial dataset, such as deep-sea oceanography data or real-time disaster prevention monitoring.

Hungry to try it out yourself?
Don't worry, the complete "breakfast recipe" we just discussed is waiting for you right here:

👉 breakfast recipe

Grab your smartphone, and let's bake and eat a global 3D model together! Bon appétit!


Looking for a Team to Solve Your Spatial Data Nightmares? Let’s Talk.

I am a part of the R&D team at Northern System Service, based in Japan. We specialize in solving "impossible" spatial data and 3D visualization problems.

We build robust systems for national-level projects. Our track record includes developing 3D data explorers for JAXA (visualizing asteroid data from the Hayabusa2 space probe) and advanced oceanographic visualization systems for JAMSTEC. Furthermore, our R&D pipeline is highly augmented—we actively utilize AI agents (like Claude Code) for autonomous development to deploy complex WebGL/Cesium solutions faster.

We are currently expanding our reach and taking on global contracting projects. If your team is struggling with WebGL performance bottlenecks, needs to visualize massive AI-generated spatial datasets, or wants to build next-generation 3D globes using Cesium, we can help.

👉 Reach out to us for consulting or contract development: Northern System Service Co., Ltd. R&D contact

👉 Follow me for more WebGL/GIS hacks: [X / GitHub]

Let’s build something visually mind-blowing together.

