Abraham Thomas
From Zero to .glb: Building a Serverless 3D Asset Pipeline with Rust and Cloudflare

Just 48 hours ago, Zayry was an idea documented in a README.md. Tonight, it's a functional, end-to-end, serverless pipeline capable of converting .obj models to .glb files.

This is the build-in-public philosophy in action. I want to document the journey as it happens: the architectural choices, the technical challenges, and the breakthroughs. This post is a deep dive into the whirlwind of the last two days.

Disclaimer: This is a Day 2 proof-of-concept. It is NOT production-ready. There are bugs, missing features, and unhandled edge cases. But the foundation is solid, and the core data flow is working.


The Architecture: A Bet on the Edge

The goal is to build a high-performance, low-latency 3D asset pipeline. Every architectural choice was made to serve that goal, on a bootstrapped budget.

The entire platform runs on Cloudflare. The components are listed below, followed by a sketch of how they surface as Worker bindings:

  • API Ingestion: A Cloudflare Worker written in TypeScript acts as our secure API gateway. It lives on the edge, making it incredibly fast for developers anywhere in the world. Its only job is to validate, authenticate, and queue.
  • Asset Storage: Cloudflare R2 is used for storing both the source .obj files and the final .glb assets. The zero egress fees are a game-changer for a service that will deliver data globally.
  • Job Decoupling: Cloudflare Queues provide the critical link between the API and the processing engine. This ensures that even if there's a huge spike in uploads, the system remains resilient and no jobs are lost.
  • State Management: Cloudflare D1, a serverless SQLite database, tracks the status of every asset as it moves through the pipeline.
  • The Core Engine: A high-performance 3D processing engine written in Rust and compiled to WebAssembly (WASM). This runs inside a separate Cloudflare Worker, giving us near-native performance in a serverless environment.
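To make these pieces concrete, here is a minimal sketch of how they might surface as bindings inside a Worker. The binding names (`SOURCE_BUCKET`, `PROCESSED_BUCKET`, `CONVERSION_QUEUE`, `DB`, `API_TOKEN`) are illustrative placeholders rather than Zayry's actual configuration, and I'm sharing one `Env` across both workers for brevity:

```typescript
// Illustrative Worker bindings (types from @cloudflare/workers-types).
// All names here are hypothetical, not Zayry's real config.
interface Env {
  SOURCE_BUCKET: R2Bucket;     // raw .obj uploads
  PROCESSED_BUCKET: R2Bucket;  // converted .glb output
  CONVERSION_QUEUE: Queue;     // decouples the API from the processor
  DB: D1Database;              // per-asset status tracking
  API_TOKEN: string;           // bearer token stored as a wrangler secret
}
```

Wrangler injects these at runtime from the project configuration, so the handlers themselves stay free of connection boilerplate.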

The flow is simple but powerful:

API Worker -> R2 (Source) -> Queue -> Processor Worker (with WASM) -> R2 (Processed) -> D1 (Status Update)


The Journey: From Scaffolding to Synthesis

With the architecture defined, the last 24 hours have been a blur of implementation.

The Plumbing

First, I built the skeleton. This involved scaffolding the monorepo and setting up the TypeScript workers for the API and the processor. I implemented bearer token authentication (using wrangler secrets, no hardcoded keys!) and request validation on the API worker.
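Here is roughly what that gateway logic can look like, reusing the hypothetical `Env` from the sketch above. The route check, job-record schema, and queue message shape are all assumptions made for illustration:

```typescript
// Sketch of the API Worker: authenticate, validate, store, record, enqueue.
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Authenticate against the bearer token held in a wrangler secret.
    const auth = request.headers.get("Authorization");
    if (auth !== `Bearer ${env.API_TOKEN}`) {
      return new Response("Unauthorized", { status: 401 });
    }
    // Validate: only .obj uploads with a body are accepted for now.
    const path = new URL(request.url).pathname;
    if (request.method !== "POST" || !request.body || !path.endsWith(".obj")) {
      return new Response("Bad Request", { status: 400 });
    }
    // Store the source, record the job, and hand it off to the queue.
    const assetId = crypto.randomUUID();
    await env.SOURCE_BUCKET.put(`${assetId}.obj`, request.body);
    await env.DB.prepare("INSERT INTO assets (id, status) VALUES (?, 'queued')")
      .bind(assetId)
      .run();
    await env.CONVERSION_QUEUE.send({ assetId });
    return Response.json({ assetId, status: "queued" }, { status: 202 });
  },
};
```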

The first major milestone was getting the full data flow working with a placeholder. I could upload a file, see it land in R2, watch a job appear in the D1 database, and see the processor consume it from the queue. The pipes were connected.
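The consumer side of that placeholder run looked roughly like this. The message shape and `assets` table mirror the assumptions above, and the "conversion" is just a byte-for-byte copy:

```typescript
// Sketch of the processor Worker's queue handler, placeholder edition.
export default {
  async queue(batch: MessageBatch<{ assetId: string }>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      const { assetId } = message.body;
      const source = await env.SOURCE_BUCKET.get(`${assetId}.obj`);
      if (!source) {
        message.retry(); // not visible yet; let the queue redeliver it
        continue;
      }
      // Placeholder: pass the bytes through untouched.
      // Day 2 swaps this line for the Rust/WASM converter.
      const output = await source.arrayBuffer();
      await env.PROCESSED_BUCKET.put(`${assetId}.glb`, output);
      await env.DB.prepare("UPDATE assets SET status = 'done' WHERE id = ?")
        .bind(assetId)
        .run();
      message.ack();
    }
  },
};
```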

Forging the Engine in Rust

Next, it was time to replace the placeholder with the real engine. I chose Rust for its performance, safety, and incredible WASM support.

  • Parsing the Input: The first step was simply reading the .obj file. I pulled in the excellent tobj crate, which made parsing the model's vertices, normals, and faces surprisingly straightforward.
  • Synthesizing the Output: This was the most challenging and rewarding part. To create a .glb file, I used the standard gltf crate. The core logic involves a meticulous mapping process:
    • Packing all the mesh data from tobj into a single binary buffer.
    • Creating glTF "views" and "accessors" that tell the renderer how to interpret that binary data.
    • Assembling everything into a valid glTF scene structure.
    • Finally, serializing the JSON structure and the binary buffer into the .glb format.
  • WASM Integration: With the conversion function written in Rust, I used wasm-pack to compile it to a lightweight WASM module. I then configured the processor worker to load this module, and a single line of TypeScript was all it took to call the Rust function from the JavaScript world (see the sketch after this list).
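A sketch of that integration, assuming the Rust crate exports a `convert_obj_to_glb` function through `#[wasm_bindgen]` and that the wasm-pack output is imported as a module (the import path and function name are hypothetical):

```typescript
// Sketch of the JS/WASM boundary in the processor Worker.
// Assumes wasm-pack generated bindings for a Rust fn `convert_obj_to_glb`.
import { convert_obj_to_glb } from "../engine/pkg/engine";

async function processAsset(assetId: string, env: Env): Promise<void> {
  const source = await env.SOURCE_BUCKET.get(`${assetId}.obj`);
  if (!source) throw new Error(`missing source for ${assetId}`);
  const objBytes = new Uint8Array(await source.arrayBuffer());
  // The single call across the boundary: .obj bytes in, .glb bytes out.
  const glbBytes: Uint8Array = convert_obj_to_glb(objBytes);
  await env.PROCESSED_BUCKET.put(`${assetId}.glb`, glbBytes);
}
```

Passing plain byte arrays keeps the interface simple; wasm-bindgen handles copying them into and out of WASM linear memory.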

The moment I uploaded the first .obj and saw a valid .glb file appear in the processed R2 bucket was a massive breakthrough.


What's Next?

This is just the beginning. The immediate next steps are:

  • Adding support for the more complex .fbx format.
  • Handling materials and textures.
  • Implementing automated LOD generation.
  • Building out robust error handling.

The roadmap is long, but the riskiest technical challenges have been overcome. We have a working engine.


Join the Journey

Thank you for reading. If you're interested in solving the problems of 3D development, I'd love for you to join the journey.

Let's keep building.
