This article was originally published on Valerius Petrini's Programming Blog on 08/18/2025
In OpenGL, you're bound to come across normals: vectors that define how light reflects off of an object. Normals are usually defined per-vertex or per-face in 3D models, but normal maps simulate small details on surfaces by giving each pixel its own normal vector. While normal maps are commonly used in graphics applications like OpenGL, the details of how they work are often hidden behind shader logic that transforms the normal vectors from texture space into the coordinate space used for lighting calculations. This article aims to explain what normals are, how they are calculated, and how normal maps are generated.
What is a normal vector?
Normal vectors are essentially vectors that are perpendicular to the surface of an object at a specific point. If you had a flat plane resting on the xy plane, the normal vector would be pointing up in the positive z direction.
Although +y is considered "up" in OpenGL, for simplicity's sake we will use +z as up.
Normals in OpenGL
are used to define how light reflects off of an object. The direction of the normal is used to calculate the direction that light travels after bouncing off a surface. Normals are useful in OpenGL because they allow us to create a sort of fake depth on objects without changing their geometry by adding new vertices. Take a brick wall, for example; instead of extruding the mortar sections of the brick wall, we can just define the normal values for each pixel.
Calculating a 2D Normal
Normal vectors in 2D space are incredibly easy to find. Let's take this function and its derivative:

f(x) = x², f′(x) = 2x
If you recall from calculus, the derivative gives us the slope of the line tangent to a function. In order to get the normal slope (m), we can take the negative reciprocal of our derivative:

m = −1 / f′(x)
After we have the slope for our point, we can write the normal vector as

(1, m)
The 1 is just a simple way to represent the x-component, while m gives the corresponding y-component based on the slope. We then normalize the vector to make it a unit normal. The direction of the vector encodes the orientation of the surface at that point, while its length doesn’t matter once it’s normalized.
Alternatively, you can also write the vector as (−f′(x), 1), which is the tangent vector (1, f′(x)) rotated by 90°.
In graphics programming, "normalizing" a vector means making its magnitude 1. You can do that by finding the magnitude of the vector (√(x² + y²) for 2D, √(x² + y² + z²) for 3D) and dividing each component of the vector by that magnitude.
For example, the normalized vector of (3, 4) would be (3/5, 4/5).
For example, at x = 1:

f′(1) = 2, so m = −1/2, giving the normal vector (1, −1/2)
And then normalizing it:

(1, −1/2) / √(1 + 1/4) = (2/√5, −1/√5) ≈ (0.894, −0.447)
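The steps above can be checked with a few lines of Python. This is just a sketch: it uses a numerical derivative in place of the symbolic one, and the function and point match the worked example.

```python
import math

def f(x):
    return x ** 2  # the example function f(x) = x^2

def derivative(f, x, h=1e-6):
    # Numerical approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def unit_normal_2d(f, x):
    slope = derivative(f, x)   # tangent slope f'(x)
    m = -1.0 / slope           # normal slope: negative reciprocal
    nx, ny = 1.0, m            # normal vector (1, m)
    length = math.hypot(nx, ny)
    return (nx / length, ny / length)

nx, ny = unit_normal_2d(f, 1.0)
print(nx, ny)  # ≈ (0.894, -0.447), matching the example above
```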
Calculating a 3D Normal
When we go into three dimensions, calculating the normal gets a bit more challenging, as now we have two separate variables, x and y. Because of this, the slope isn't going in one direction per point, so we need to use partial derivatives to solve for them. Like a vector, we need to split the slope into the x and y directions:

∂f/∂x and ∂f/∂y
Now we have two tangent lines representing the slopes in both the x and y directions. To find the normal, we can convert these two tangent lines into vectors and find the cross product of those two values:

Tₓ = (1, 0, ∂f/∂x), Tᵧ = (0, 1, ∂f/∂y)

Tₓ × Tᵧ = (−∂f/∂x, −∂f/∂y, 1)
Additionally, recall that there are two possible vectors that can be perpendicular to Tₓ and Tᵧ: (−∂f/∂x, −∂f/∂y, 1) and (∂f/∂x, ∂f/∂y, −1). For this example we will use the first one, as it points upward (positive z), but the second can also be used based on the use case. The reason we use 1 in the tangent vectors is just to provide a step of one unit in the x or y direction so each tangent vector has a defined direction.
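Here's that cross-product construction sketched in Python. The surface z = x² + y² is just an assumed example for illustration, and the partial derivatives are computed numerically:

```python
import math

def fxy(x, y):
    return x ** 2 + y ** 2  # assumed example surface z = x^2 + y^2

def partials(f, x, y, h=1e-6):
    # Numerical partial derivatives df/dx and df/dy
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfdx, dfdy

def surface_normal(f, x, y):
    dfdx, dfdy = partials(f, x, y)
    # Tangents T_x = (1, 0, df/dx) and T_y = (0, 1, df/dy);
    # their cross product is (-df/dx, -df/dy, 1)
    nx, ny, nz = -dfdx, -dfdy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

n = surface_normal(fxy, 1.0, 1.0)  # partials are (2, 2) at (1, 1)
print(n)  # (-2, -2, 1) normalized: ≈ (-0.667, -0.667, 0.333)
```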
In fact, we can use this fact with another method. Instead of calculating the cross product, we can instead calculate the gradient. We can first rewrite z = f(x, y) as an implicit function:

F(x, y, z) = f(x, y) − z = 0
The reason we can use gradients for this is that we convert our function into an implicit function whose solution set is a level surface. The surface is only made up of points that satisfy F(x, y, z) = 0. Moving along the surface does not change F, so any vector tangent to the surface produces zero change in F. By definition, the gradient of F points in the direction of the greatest change, which is perpendicular to all the tangent vectors, making it a normal vector:

∇F = (∂f/∂x, ∂f/∂y, −1)
That normal vector points downward, so to make it point up we can multiply by negative one:

−∇F = (−∂f/∂x, −∂f/∂y, 1)
In that case, when we have the two partial derivatives for x and y, we can write an equivalence:

Tₓ × Tᵧ = −∇F = (−∂f/∂x, −∂f/∂y, 1)
After that, you should normalize the vector as usual to get the unit vector.
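The same result via the gradient route can be sketched in Python, again using the assumed illustrative surface z = x² + y² rather than anything from this article:

```python
def grad_F(x, y, h=1e-6):
    # F(x, y, z) = f(x, y) - z  with f(x, y) = x^2 + y^2 (illustrative)
    f = lambda x, y: x ** 2 + y ** 2
    dFdx = (f(x + h, y) - f(x - h, y)) / (2 * h)  # dF/dx = df/dx
    dFdy = (f(x, y + h) - f(x, y - h)) / (2 * h)  # dF/dy = df/dy
    dFdz = -1.0                                   # dF/dz = -1 exactly
    return (dFdx, dFdy, dFdz)

gx, gy, gz = grad_F(1.0, 1.0)
normal = (-gx, -gy, -gz)  # flip -grad(F) so the normal points up (+z)
print(normal)  # ≈ (-2, -2, 1), same as the cross-product result before normalizing
```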
Creating Normal Maps
When creating normal maps for textures in video games, it usually begins with a heightmap: a grayscale image where white (#FFF) corresponds to the highest point of the texture, and black (#000) corresponds to the lowest point. Take this brick wall found on freepbr.com:
The heightmap of this texture would be this:
Here, the mortar sections of the wall appear more inset than the bricks themselves.
We need the heightmap so we can find the normal at each point using the gradient; we can then convert those normals into RGB values and create a normal map.
However, we run into an issue very quickly. In our examples, we always had continuous functions like f(x) = x² and z = f(x, y), but our heightmap doesn't have a specific function we can use.
To circumvent this, we can get the "local" derivative by using our definition of a derivative:

f′(x) = lim(h → 0) (f(x + h) − f(x)) / h

Since we can't have h be 0 in our code, we need to use a small number for h and approximate the derivative using the central difference approximation:

f′(x) ≈ (f(x + h) − f(x − h)) / (2h)
We use 1 for h, as we are using the next and previous pixel values for our calculations.
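Applied to a discrete row of pixel heights with h = 1, the central difference is just neighbor subtraction. A small Python sketch (the sample values are made up):

```python
heights = [0.0, 0.1, 0.4, 0.9, 1.0]  # made-up grayscale height samples

def central_diff(samples, i):
    # f'(x) ≈ (f(x + 1) - f(x - 1)) / 2 for interior pixels
    return (samples[i + 1] - samples[i - 1]) / 2.0

print(central_diff(heights, 2))  # (0.9 - 0.1) / 2 = 0.4
```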
In GLSL, that would look something like this:
```glsl
// 'heightMap' is a sampler2D containing the grayscale heightmap
// 'texelSize' is vec2(1.0/width, 1.0/height) of the texture
uniform sampler2D heightMap;
uniform vec2 texelSize;

// 'texCoords' are the texture coordinates for the current fragment
vec3 computeNormal(vec2 texCoords) {
    // This won't work for pixels on the edge of the texture.
    // In that case, you'll want to use forward and backward
    // difference approximations.
    // Additionally, as our texture is grayscale, r, g, and b
    // have the same value, so we can sample any one of them.
    float hL = texture(heightMap, texCoords - vec2(texelSize.x, 0.0)).r;
    float hR = texture(heightMap, texCoords + vec2(texelSize.x, 0.0)).r;
    float hD = texture(heightMap, texCoords - vec2(0.0, texelSize.y)).r;
    float hU = texture(heightMap, texCoords + vec2(0.0, texelSize.y)).r;

    // Central differences
    // You can multiply these by a height scale factor to exaggerate the bumps
    float dX = (hR - hL) * 0.5;
    float dY = (hU - hD) * 0.5;

    // Construct the normal vector
    vec3 normal = normalize(vec3(-dX, -dY, 1.0));

    // Map from [-1, 1] to [0, 1] for RGB
    // GLSL expects a value from 0 to 1 for RGB values,
    // but if you're exporting to an image, you need to
    // multiply by 255 in order to get RGB values from 0 to 255
    return normal * 0.5 + 0.5;
}
```
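For an offline version of the same idea, here's a sketch in plain Python that turns a small height grid into 0–255 normal-map RGB values. It clamps neighbors at the borders (effectively a forward/backward difference there), which is an assumption on my part, not something from the shader above:

```python
import math

def height_to_normal_map(heights):
    # heights: 2D list of values in [0, 1]; returns per-pixel (r, g, b) in 0..255
    h, w = len(heights), len(heights[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Clamp neighbor indices at the borders (assumption: forward/
            # backward difference at the edges instead of central difference)
            hL = heights[y][max(x - 1, 0)]
            hR = heights[y][min(x + 1, w - 1)]
            hD = heights[max(y - 1, 0)][x]
            hU = heights[min(y + 1, h - 1)][x]
            dX = (hR - hL) * 0.5
            dY = (hU - hD) * 0.5
            nx, ny, nz = -dX, -dY, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            nx, ny, nz = nx / length, ny / length, nz / length
            # Map [-1, 1] -> [0, 255] for RGB output
            row.append(tuple(int((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz)))
        out.append(row)
    return out

flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal_map(flat)[0][0])  # flat surface -> (127, 127, 255)
```

A completely flat heightmap maps every pixel to that familiar light-blue normal-map color, since every normal points straight up.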
If we did write to an image, it would look something like this:
Final Thoughts
When our normal map is done, we can finally use it in our code. If you want to experiment more, try implementing forward and backward difference approximations in the GLSL code.
Happy Coding!