One of the biggest limitations of the engine right now is that it can only generate geometry algorithmically. To be more useful, we'd like to be able to load custom meshes made by artists. There are a few different formats that do different things, but .obj is probably the simplest and most widely used. The format was created by Wavefront and has become a de facto standard for simple meshes. It's easy enough to write a loader for, and that's what we'll be focusing on in this post. Note that the loader won't handle every feature of OBJ, because the full spec gets into 3D splines and other things our engine just doesn't deal with, but we'll get positions, normals and UVs, which should still cover a large number of .obj files.
Create a loader
.obj files are text files where each line represents a piece of data, usually a point of some sort. Each line starts with a short code that describes what the data is.
Example
v -3.000000 1.800000 0.000000
For this we'll be concerned with positions, normals and UVs.
- vertex position = v
- vertex UV = vt
- vertex normal = vn
Each code is followed by several space-delimited values. These can vary in number, but we'll assume the data is 3D, so there will be three components per line. This gives us a "pool" of points. The pool is then followed by instructions for constructing faces, marked with f. This works very much like the index buffer.
f 2909 2921 2939
This basically says "construct a triangle" with vertices 2909, 2921 and 2939. Be careful here: face indices are 1-indexed! This means the first vertex is 1, not 0, so we'll wind up subtracting 1 from each of them. You can also match up specific types of vertices.
f 1/2/2 2/3/1 3/4/2
This means vertex 1 is a combination of position 1, UV 2 and normal 2; vertex 2 is a combination of position 2, UV 3 and normal 1; and so on. If the slash form isn't used, the same index applies to all parts. Primitives beyond triangles (e.g. quads) are allowed, but our engine only supports triangles, so we'll assume faces have 3 points. For simplicity, and so that we can keep our existing data format, we'll ignore these multi-index forms.
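If we ever did want to support the slash form, a per-token parser could look something like this sketch (parseFaceVertex is a hypothetical helper for illustration, not part of the loader we'll build):

```javascript
// Parses a single face token like "1", "1/2", "1//3" or "1/2/3" into
// zero-based indices. Missing parts fall back to the position index,
// mirroring the "same index for all parts" behavior described above.
function parseFaceVertex(token){
  const [p, t, n] = token.split("/");
  const position = parseInt(p, 10) - 1;
  return {
    position,
    uv: t ? parseInt(t, 10) - 1 : position,
    normal: n ? parseInt(n, 10) - 1 : position
  };
}

console.log(parseFaceVertex("1/2/2")); // { position: 0, uv: 1, normal: 1 }
console.log(parseFaceVertex("5"));     // { position: 4, uv: 4, normal: 4 }
```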
Here's what I came up with:
export function loadObj(txt){
  const positions = [];
  const normals = [];
  const uvs = [];
  const triangles = [];
  const colors = [];
  const lines = txt.split("\n");

  for(const line of lines){
    const normalizedLine = line.trim();
    if(!normalizedLine || normalizedLine.startsWith("#")) continue;
    const parts = normalizedLine.split(/\s+/g);
    const values = parts.slice(1).map(x => parseFloat(x));

    switch(parts[0]){
      case "v": {
        positions.push(...values);
        break;
      }
      case "c": { //custom extension
        colors.push(...values);
        break;
      }
      case "vt": {
        uvs.push(...values);
        break;
      }
      case "vn": {
        normals.push(...values);
        break;
      }
      case "f": {
        triangles.push(...values.map(x => x - 1));
        break;
      }
    }
  }

  return {
    positions,
    uvs,
    normals,
    triangles,
    colors
  };
}
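As a quick sanity check, here's what the loader produces for a minimal one-triangle file (the loader body is repeated inline so the snippet runs standalone):

```javascript
// Condensed inline copy of loadObj so this example is self-contained.
function loadObj(txt){
  const positions = [], normals = [], uvs = [], triangles = [], colors = [];
  for(const line of txt.split("\n")){
    const normalizedLine = line.trim();
    if(!normalizedLine || normalizedLine.startsWith("#")) continue;
    const parts = normalizedLine.split(/\s+/g);
    const values = parts.slice(1).map(x => parseFloat(x));
    switch(parts[0]){
      case "v": positions.push(...values); break;
      case "c": colors.push(...values); break;
      case "vt": uvs.push(...values); break;
      case "vn": normals.push(...values); break;
      case "f": triangles.push(...values.map(x => x - 1)); break;
    }
  }
  return { positions, uvs, normals, triangles, colors };
}

const txt = [
  "# a single triangle",
  "v 0 0 0",
  "v 1 0 0",
  "v 0 1 0",
  "f 1 2 3"
].join("\n");

const mesh = loadObj(txt);
console.log(mesh.positions); // [0, 0, 0, 1, 0, 0, 0, 1, 0]
console.log(mesh.triangles); // [0, 1, 2]
```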
You'll notice here that I added a new entity type c. This is not part of the .obj specification! I added it because playing around with simple coloring can be useful for debugging, especially when you are trying to figure out vertex ordering. Any files you create with c may not be readable by other .obj readers. We also throw out any lines starting with #, as those are comments.
I also created a small helper function which does some of the normal fetch plumbing (this could perhaps be extended to include images/shaders/loading meshes directly):
export async function loadUrl(url, type = "text"){
  const res = await fetch(url);
  switch(type){
    case "text": return res.text();
    case "blob": return res.blob();
    case "arrayBuffer": return res.arrayBuffer();
  }
}
Simple pyramid
Note from here on I will be adding some scaling and translation to make the shapes visible:
mesh
  .setScale({ x: 0.5, y: 0.5, z: 0.5 })
  .setTranslation({ y: -0.5 });
For our first test let's try something a little more human readable, a pyramid:
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
v 0.5 0.5 1.6
#custom extension for debugging!
c 1 0 0
c 0 1 0
c 0 0 1
c 1 1 0
c 0 1 1
f 5 2 3
f 4 5 3
f 6 3 2
f 5 6 2
f 4 6 5
f 6 4 3
I found this simple example here: https://people.sc.fsu.edu/~jburkardt/data/obj/pyramid.obj and extended it with colors so it's easier to visually debug.
For debugging I also used a simple shader that passes colors through, so we don't need to worry about lighting:
//color.vert.glsl
uniform mat4 uProjectionMatrix;
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
attribute vec3 aVertexPosition;
attribute vec3 aVertexColor;
varying mediump vec4 vColor;

void main() {
  gl_Position = uProjectionMatrix * uViewMatrix * uModelMatrix * vec4(aVertexPosition, 1.0);
  vColor = vec4(aVertexColor, 1.0);
}

//color.frag.glsl
varying mediump vec4 vColor;

void main() {
  gl_FragColor = vColor;
}
And our pyramid:
Simple cube
For our second test let's get a little more complicated.
v -0.5 -0.5 -0.5
v 0.5 -0.5 -0.5
v 0.5 0.5 -0.5
v -0.5 0.5 -0.5
v 0.5 -0.5 -0.5
v 0.5 -0.5 0.5
v 0.5 0.5 0.5
v 0.5 0.5 -0.5
v 0.5 -0.5 0.5
v -0.5 -0.5 0.5
v -0.5 0.5 0.5
v 0.5 0.5 0.5
v -0.5 -0.5 0.5
v -0.5 -0.5 -0.5
v -0.5 0.5 -0.5
v -0.5 0.5 0.5
v -0.5 0.5 -0.5
v 0.5 0.5 -0.5
v 0.5 0.5 0.5
v -0.5 0.5 0.5
v -0.5 -0.5 0.5
v 0.5 -0.5 0.5
v 0.5 -0.5 -0.5
v -0.5 -0.5 -0.5
#custom color extensions for debugging
c 1 0 0
c 1 0 0
c 1 0 0
c 1 0 0
c 0 1 0
c 0 1 0
c 0 1 0
c 0 1 0
c 0 0 1
c 0 0 1
c 0 0 1
c 0 0 1
c 1 1 0
c 1 1 0
c 1 1 0
c 1 1 0
c 0 1 1
c 0 1 1
c 0 1 1
c 0 1 1
c 1 0 1
c 1 0 1
c 1 0 1
c 1 0 1
vt 0 0
vt 1 0
vt 1 1
vt 0 1
vt 0 0
vt 1 0
vt 1 1
vt 0 1
vt 0 0
vt 1 0
vt 1 1
vt 0 1
vt 0 0
vt 1 0
vt 1 1
vt 0 1
vt 0 0
vt 1 0
vt 1 1
vt 0 1
vt 0 0
vt 1 0
vt 1 1
vt 0 1
vn 0.0 0.0 -1.0
vn 0.0 0.0 -1.0
vn 0.0 0.0 -1.0
vn 0.0 0.0 -1.0
vn 1.0 0.0 0.0
vn 1.0 0.0 0.0
vn 1.0 0.0 0.0
vn 1.0 0.0 0.0
vn 0.0 0.0 1.0
vn 0.0 0.0 1.0
vn 0.0 0.0 1.0
vn 0.0 0.0 1.0
vn -1.0 0.0 0.0
vn -1.0 0.0 0.0
vn -1.0 0.0 0.0
vn -1.0 0.0 0.0
vn 0.0 1.0 0.0
vn 0.0 1.0 0.0
vn 0.0 1.0 0.0
vn 0.0 1.0 0.0
vn 0.0 -1.0 0.0
vn 0.0 -1.0 0.0
vn 0.0 -1.0 0.0
vn 0.0 -1.0 0.0
f 1 2 3
f 1 3 4
f 5 6 7
f 5 7 8
f 9 10 11
f 9 11 12
f 13 14 15
f 13 15 16
f 17 18 19
f 17 19 20
f 21 22 23
f 21 23 24
This is a direct translation of the cube from data.js (with +1 added to the indices). Note that we could have made this slightly more compact. If you remember, we have to duplicate points on hard-edged figures like cubes because the shared corners need different normals on each face. The f command lets us mix normals and UVs with positions as explained above, but to use that we'd have to generate all of those implied vertex combinations. We won't be doing that today.
This produces the output:
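As an aside, if we did want to generate those implied vertices from the mixed p/t/n form, the expansion might look something like this sketch (expandFaces and its signature are illustrative, not part of the engine; it assumes every token uses the full p/t/n form):

```javascript
// Expands OBJ-style "p/t/n" face tokens into the flat, per-vertex arrays
// our engine expects. Each unique p/t/n combination becomes one output
// vertex; repeated combinations are reused via a lookup map.
function expandFaces(faceTokens, positionPool, uvPool, normalPool){
  const positions = [], uvs = [], normals = [], triangles = [];
  const seen = new Map(); // "p/t/n" -> output vertex index
  for(const token of faceTokens){
    if(!seen.has(token)){
      const [p, t, n] = token.split("/").map(x => parseInt(x, 10) - 1);
      positions.push(...positionPool.slice(p * 3, p * 3 + 3));
      uvs.push(...uvPool.slice(t * 2, t * 2 + 2));
      normals.push(...normalPool.slice(n * 3, n * 3 + 3));
      seen.set(token, seen.size); // index of the newly added vertex
    }
    triangles.push(seen.get(token));
  }
  return { positions, uvs, normals, triangles };
}

const result = expandFaces(
  ["1/1/1", "2/2/1", "3/3/1"],
  [0,0,0, 1,0,0, 0,1,0], // 3 positions
  [0,0, 1,0, 0,1],       // 3 UVs
  [0,0,1]                // 1 shared normal
);
console.log(result.triangles); // [0, 1, 2]
```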
The teapot
Let's use a more complicated mesh. In typical fashion for 3D tutorials and testing, I'm going to use the Utah Teapot. It's complex, and since everyone else uses it, it's easy to compare implementations to see if they look the same. The version I found is here: https://raw.githubusercontent.com/jaz303/utah-teapot/master/teapot.obj
I've added a parameter to the object loader that allows us to add a color to each vertex since this mesh has way more vertices than we can hand edit colors for:
export function loadObj(txt, color){
  //...
  switch(parts[0]){
    case "v": {
      positions.push(...values);
      if(color){ //only when an override color was passed in
        colors.push(...color);
      }
      break;
    }
    case "c": { //custom extension
      if(!color){
        colors.push(...values);
      }
      break;
    }
    //...
  }
  //...
}
If color is specified it overrides the extended color values.
Now if we draw a red teapot:
Nice!
Although when we try to apply pixel shading:
This is because the file contains no normals. With some advanced geometry processing we could maybe recover them, but for now we'll just have to find a better file.
I did find another example that thankfully didn't use complex vertex form:
https://people.sc.fsu.edu/~jburkardt/data/obj/teapot.obj
One aspect is still troubling:
We can see through it at some angles. We could fix this by turning backface culling off, but that's not really a good fix. The real problem is that the vertex winding order is wrong. Here I found an inconsistency in my test meshes: the order of vertices in a face is supposed to be counter-clockwise, but sometimes it was clockwise. The teapot we're rendering here needs the vertices reversed, but the teapot with normals did not.
I made a small update to the obj-loader to let the user specify the winding:
//reverseWinding is a new boolean parameter
case "f": {
  const zeroBasedIndices = values.map(x => x - 1);
  triangles.push(
    ...(reverseWinding ? zeroBasedIndices.reverse() : zeroBasedIndices)
  );
  break;
}
With that things look fixed:
However, there are still some issues when looking from the top.
These appear to just be defects in the model; at least it makes intuitive sense why these faces wouldn't show correctly. Again, turning off backface culling can hide this.
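For reference, toggling culling is just a matter of flipping the CULL_FACE capability (assuming gl is our WebGL context object):

```javascript
// Assuming `gl` is our WebGLRenderingContext
gl.disable(gl.CULL_FACE); // draw back faces too, hiding the model defects
// ...and to restore normal culling:
gl.enable(gl.CULL_FACE);
```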
Finally using a pixel-shaded lighting on the teapot with normals we get:
This seems good enough.
Diversion #1: normalizing a mesh
As I was testing different .obj files, I found that they are all over the place in terms of size, which required a lot of transforming to get right. To make this process easier I created a new method on Mesh that scales it down to a 1x1x1 unit volume.
normalizePositions(){
  let max = -Infinity;
  for(let i = 0; i < this.#positions.length; i += 3){
    const x = this.#positions[i];
    const y = this.#positions[i + 1];
    const z = this.#positions[i + 2];
    if(x > max){
      max = x;
    }
    if(y > max){
      max = y;
    }
    if(z > max){
      max = z;
    }
  }
  for(let i = 0; i < this.#positions.length; i++){
    this.#positions[i] /= max;
  }
  return this;
}
All we do is find the largest coordinate value and then scale everything relative to it. This means we can instantly get a model that fits neatly in view. We could have applied this to the model matrix instead of changing the positions themselves; I'm not sure if that would be better, but this works.
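The same idea as a standalone function, with one small variation: taking the absolute value so meshes with large negative coordinates also normalize correctly (a hypothetical variant, not the Mesh method above):

```javascript
// Scales a flat [x, y, z, ...] array so the largest coordinate
// magnitude becomes 1, fitting the mesh into a unit volume.
function normalizePositionsArray(positions){
  let max = -Infinity;
  for(const p of positions){
    if(Math.abs(p) > max){
      max = Math.abs(p);
    }
  }
  return positions.map(p => p / max);
}

console.log(normalizePositionsArray([0, 2, 4, -8, 1, 0]));
// [0, 0.25, 0.5, -1, 0.125, 0]
```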
Diversion #2: adding a screen capture button
Before, I was just taking screen grabs that were unevenly cropped. To make things a little better, let's add a screen capture button.
I create a new button and wire it up to an event handler. Naively, we'd expect that we could just call canvas.toDataURL() and download the result using the a-tag trick, e.g.:
export function downloadUrl(url, fileName) {
  const link = document.createElement("a");
  link.href = url;
  link.download = fileName;
  link.click();
}
The problem is that this will likely produce an empty image. It'll have the exact same dimensions, but it'll be entirely blank. The reason is that toDataURL happens in a different event, and the canvas drawing buffer is cleared before that event runs. So we have two options. We could pass preserveDrawingBuffer: true when creating the canvas context to keep the buffer around, but I think this might have unintended consequences if we later try something fancy with the buffer. Instead, we just need to move the toDataURL call into the draw loop, that is, the render method.
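For reference, the first option would just be a flag at context-creation time (shown here for a WebGL context):

```javascript
// Option 1 (not the one we chose): keep the drawing buffer readable
// after compositing by requesting it when creating the context.
const gl = canvas.getContext("webgl", { preserveDrawingBuffer: true });
```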
Our button handler just becomes:
this.#shouldCapture = true;
And then at the very bottom of the render method:
if(this.#shouldCapture){
  this.#shouldCapture = false;
  downloadUrl(this.dom.canvas.toDataURL(), "capture.png");
}
This just sets a flag so that the next time we render we capture at the end of the render sequence and reset the flag.
Final code for this post can be found here: https://github.com/ndesmic/geogl/tree/v9
Comments
Hey! Thanks for this amazing series! Any plans on doing something related to loading animated models?
Eventually, someday. I'd love to build up to GLTF support.