Fatih Küçükkarakurt

Ray Tracing in Game Development

In this article we discuss ray tracing in computer graphics. Ray tracing plays a very important role in computer graphics, and to some extent in video games, as we will see throughout this article and guide. The goal is not only to talk about ray tracing but to implement a simple CPU ray tracer using C++. With an understanding of mathematical topics used in games, such as vectors, rays, primitives, and geometry, it is possible to create a simple ray tracer with relatively little code.

This article discusses the various types of ray tracing algorithms as well as the real-time ray tracing efforts taking place today. A few years ago real-time ray tracing was not a commercially viable option, but today it is a reality, although it still has a way to go before the industry can use it for the entire rendering solution of a game.

Above all, this article aims to provide hands-on experience with a simple CPU ray tracer. Later in this guide we will use the underlying ideas behind ray tracing to implement advanced effects seen in modern video games, such as precomputed lighting and shadowing.

Ray Tracing In Computer Graphics

One of the oldest techniques used for generating rendered images on the CPU is ray tracing. Ray tracing has been used across the entertainment industry for decades now, and one of the most well-known uses of ray tracing–based algorithms is in animated movies such as Teenage Mutant Ninja Turtles.

Ray tracing is based on the idea that you can perform ray intersection tests on all objects within a scene to generate an image. Every time a ray hits an object, the point of intersection is determined. If that point of intersection falls within the view volume of the virtual observer, the pixel on the screen image being rendered is colored based on the object's material information (e.g., color, texture, etc.) and the scene's properties (e.g., fog, lighting, etc.).

An example of this is shown below.

[Figure: ray tracing overview]
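To make the idea concrete, the sketch below shows what a single ray–object intersection test can look like in C++, using a sphere as the object. The Vector3D, Ray, and Sphere types here are minimal illustrative definitions written for this article, not part of any particular library.

#include <cmath>

// Minimal 3D vector type with just the operations the tests need.
struct Vector3D {
    float x, y, z;
    Vector3D operator+(const Vector3D& v) const { return {x + v.x, y + v.y, z + v.z}; }
    Vector3D operator-(const Vector3D& v) const { return {x - v.x, y - v.y, z - v.z}; }
    Vector3D operator*(float s) const { return {x * s, y * s, z * s}; }
    float Dot(const Vector3D& v) const { return x * v.x + y * v.y + z * v.z; }
};

inline Vector3D Normalize(const Vector3D& v)
{
    float len = std::sqrt(v.Dot(v));
    return {v.x / len, v.y / len, v.z / len};
}

struct Ray {
    Vector3D origin;
    Vector3D direction; // assumed to be unit-length
};

struct Sphere {
    Vector3D center;
    float radius;
};

// Returns true if the ray hits the sphere; 'distance' receives the distance
// from the ray origin to the nearest intersection point along the ray.
bool Intersects(const Ray& ray, const Sphere& sphere, float& distance)
{
    // Solve |origin + t * direction - center|^2 = radius^2 for t.
    Vector3D oc = ray.origin - sphere.center;
    float b = oc.Dot(ray.direction);                 // half of the usual 'b' term
    float c = oc.Dot(oc) - sphere.radius * sphere.radius;
    float disc = b * b - c;                          // discriminant (unit direction)
    if (disc < 0.0f) return false;                   // the ray misses the sphere

    float t = -b - std::sqrt(disc);                  // nearest of the two roots
    if (t < 0.0f) t = -b + std::sqrt(disc);          // origin inside: take the far root
    if (t < 0.0f) return false;                      // sphere is behind the ray

    distance = t;
    return true;
}

The same pattern extends to triangles and planes; only the intersection math changes.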

Forward Ray Tracing

Different types of ray tracing are used in computer graphics. In the first type that we will discuss, known as forward ray tracing, mathematical rays are traced from a light source into a 3D scene. When a ray intersects an object that falls within the viewing volume, the pixel that corresponds to that point of intersection is colored appropriately. This method is an attempt at simulating light rays striking surfaces in nature, although it is far more simplistic and general than anything seen in the real world.

Tracing rays in this sense means starting rays at some origin, which in the case of forward ray tracing is the light's position, and pointing those rays outward into the scene in different directions. The more rays that are used, the better the chances of hitting objects that fall within the viewing volume.

A simplistic example of this is shown below.

[Figure: rays traced outward from a light source into a scene]

To create a simple forward ray tracer, for example one that traces spheres, the following general steps are required:

  1. Define the scene’s spheres and light sources.
  2. For every light source, send a finite number of rays into the scene (a sketch of this step follows the list); more rays give a better chance at quality, but at a cost in performance.
  3. For every intersection, test whether the point of intersection falls within the viewing volume.
  4. If it does, shade the image pixel for that ray based on the sphere’s, the light’s, and the scene’s properties.
  5. Save the image to a file, display it on the screen, or use it in some other way meaningful to your application.
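As a rough illustration of step 2, the snippet below generates a given number of rays leaving a light's position in uniformly distributed directions. It reuses the illustrative Vector3D and Ray types from the sketch above, and a real ray tracer might well distribute its rays differently.

#include <cmath>
#include <random>
#include <vector>

// Builds 'count' rays leaving 'lightPos' in directions distributed
// uniformly over the unit sphere (illustrative helper, not a library API).
std::vector<Ray> Generate_Light_Rays(const Vector3D& lightPos, int count)
{
    std::mt19937 rng(12345); // fixed seed so runs are repeatable
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    std::vector<Ray> rays;
    rays.reserve(count);
    for (int i = 0; i < count; ++i)
    {
        // Map two uniform random numbers onto the unit sphere.
        float z   = 2.0f * uniform(rng) - 1.0f;     // cos(theta) in [-1, 1]
        float phi = 6.2831853f * uniform(rng);      // angle around the z axis
        float r   = std::sqrt(1.0f - z * z);
        rays.push_back({lightPos, {r * std::cos(phi), r * std::sin(phi), z}});
    }
    return rays;
}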

This quick overview of forward ray tracing does not appear complex at first glance, and a ray tracer that did only what was just described would be pretty bare-bones. Pseudo-code for the preceding list of steps could look something like the following:

function Forward_RayTracing(scene)
{
    foreach (light in scene)
    {
        foreach (ray in light)
        {
            closestObject = null;
            minDist = "Some Large Value";

            // Find the closest object this ray intersects.
            foreach (object in scene)
            {
                if (ray->Intersects(object) && object->distance < minDist)
                {
                    minDist = object->distance;
                    point_of_intersect = ray->Get_Intersection();
                    closestObject = object;
                }
            }

            // Shade the pixel only if something was hit and the hit
            // falls within the view volume.
            if (closestObject != null && Is_In_View(point_of_intersect))
            {
                Shade_Pixel(point_of_intersect, light, closestObject);
            }
        }
    }

    Save_Render("output.jpg");
}

Assuming that you have a scene object that holds a list of lights and geometric objects, a ray structure, a function to test whether a point of intersection falls on a pixel of the rendered image, and a function to shade that pixel based on the scene’s information, you can create a complete, yet simple, ray-traced image using the pseudo-code above. The pseudo-code is just an example; it does not take into account textures and other material properties, shadows, reflections, refractions, or other common effects used in ray tracing rendering engines.

If you were to code an application that uses forward ray tracing, you would quickly run into the problems associated with the technique. For one, there is no guarantee that the rays sent from a light source will ever strike an object that falls within the viewing volume. There is also no guarantee that every pixel that should be shaded will be shaded; a pixel is shaded only if one of the rays traced from the light sources happens to strike it.

In nature, light rays bounce around the environment many times while moving at the speed of light. There are so many light rays in nature that, eventually, every surface is hit. In the virtual world we need to send more and more widely distributed rays throughout a scene to shade an entire environment. The processing of these rays does not happen at the speed of light, and processing even one ray can be costly on its own. When using forward ray tracing, we have to find some way to ensure that every screen pixel that should be shaded has at least one ray strike it, so that the point of intersection is shaded and visible in the final render.

The biggest problem with forward ray tracing is that the more rays we use, the slower the performance, and to have any potential for an increase in quality, a lot of rays are needed. In the pseudo-code we have to test every object in the scene against every ray for every light. If there were 100 lights, each sending 10,000 rays into the scene, a program would need a million intersection tests for every object in the scene, and that is for a single rendered frame that uses no techniques that generate additional rays. The number of rays increases dramatically once reflections are included, which cause rays to bounce off objects back into the scene. Global illumination techniques that use light bounces in the calculation of lights and shadows also dramatically increase the number of rays and the information needed to complete the scene, and other techniques such as anti-aliasing, refractions, and volumetric effects compound the cost further. With the number of operations required for even a simple scene, it is no wonder that ray tracing has traditionally been reserved for non-real-time applications.

Forward ray tracing can be improved. We can limit the number of rays that are used, and we can use spatial data structures to cull geometry from the scene for each ray test, but the application will still perform many tests that end up wasting CPU time, and it is very difficult to determine in advance which rays will be wasted and which will not. Optimizing a forward ray tracer is not a trivial task and can become quite tricky and complex. Even with optimizations, the processing time is still high compared to the next type of ray tracing we will discuss.

The downsides to using forward ray tracing include the following:

  • CPU costs are far higher than those of other methods used to perform ray tracing on the same information.
  • The quality of the rendered result depends highly on the number of rays that are traced in the scene, many of which could end up not affecting the rendered scene.
  • If not enough rays are used or if they are not distributed effectively, some pixels might not be shaded, even though they should be.
  • It cannot be reasonably done in real time.
  • It is generally obsolete for computer and game graphics.
  • It can be harder to optimize.

The major problem with forward ray tracing is that regardless of how many rays we generate for each light, only a very small percentage will affect the view (i.e., will hit an object we can see). This means we can perform millions or billions of useless intersection tests on the scene. Since a scene can extend beyond the view volume, and since factors such as reflections can cause rays to bounce off surfaces outside the view and then back into it (or vice versa), developing algorithms to minimize the number of wasted light rays is very difficult. Some rays might even start off striking geometry outside the view volume and eventually bounce around and strike something that is in it, which is common in global illumination.

The best way to avoid the downsides of forward ray tracing is to use backward ray tracing. In the next section we look into backward ray tracing and implement a simple ray tracer that supports spheres, triangles, and planes.

Backward Ray Tracing

Backward ray tracing takes a different approach from the one discussed so far in this article. With backward ray tracing, rays are not traced from the light sources into the scene; instead, they are traced from the camera into the scene. In backward ray tracing one ray is traced toward each pixel of the rendered image, whereas in forward ray tracing an application sends thousands of rays from multiple light sources. By sending the rays from the camera into the scene, where each ray points toward a different pixel on an imaginary image plane, pixels are guaranteed to be shaded whenever their ray intersects an object.

A visual of backward ray tracing is shown below.

[Figure: rays traced from the camera through each pixel of the image plane]

Assuming that one ray is used for every screen pixel, the minimum number of rays depends on the image’s resolution; for example, an 800×600 image requires 480,000 primary rays. By sending a ray for each pixel, the ray tracer ensures that every pixel whose ray intersects something is shaded. In forward ray tracing the traced rays do not always fall within the view volume. By sending rays from the viewer in the direction the viewer is looking, the rays originate and point within the view volume, and if there are any objects along a ray’s path, the pixel for that ray will be shaded appropriately.

The algorithm to convert the simple forward ray tracer from earlier in this article into a backward ray tracer can follow these general steps:

  • For each pixel, create a ray that points in the direction of the pixel’s location on the imaginary image plane.
  • For each object in the scene, test whether the ray intersects it.
  • If there is an intersection, record the object only if it is the closest object intersected so far.
  • Once all objects have been tested, shade the image pixel based on the point of intersection with the closest object and the scene’s information (e.g., object material, light information, etc.).
  • Once complete, save the results or use the image in some other meaningful way.

The algorithm discussed so far for both types of ray tracers does not take into account optimizations or factors such as anti-aliasing, texture mapping, reflections, refractions, shadows, depth-of-field, fog, or global illumination.

The steps outlined for a simple backward ray tracer can be turned into pseudo-code that may generally look like the following:

function Backward_RayTracing(scene)
{
    foreach (pixel in image)
    {
        ray = Calculate_Ray(pixel);
        closestObject = null;
        minDist = "Some Large Value";

        // Find the closest object this pixel's ray intersects.
        foreach (object in scene)
        {
            if (ray->Intersects(object) && object->distance < minDist)
            {
                minDist = object->distance;
                point_of_intersect = ray->Get_Intersection();
                closestObject = object;
            }
        }

        // Add the contribution of every light at the intersection point.
        if (closestObject != null)
        {
            foreach (light in scene)
            {
                Shade_Pixel(point_of_intersect, light, closestObject);
            }
        }
    }

    Save_Render("output.jpg");
}

Backward ray tracing is also known as ray casting.

The simple backward ray tracing algorithm in the pseudo-code above is similar to the forward version but with a few key differences. The major difference is at the start of the algorithm: a forward ray tracer traces some number of rays from each light source, whereas a backward ray tracer traces one ray toward each screen pixel. This greatly reduces the number of rays used in a scene compared to forward ray tracing without sacrificing any final quality, assuming the same conditions exist in both scenes.

The breakdown of the lines of code in the pseudo-code sample for the backward ray tracer is as follows:

  • For each pixel that makes up the rendered image, create a direction that points from the viewer’s position toward the current pixel, which is a vector built from the pixel’s X and Y index and a constant value used for the depth.
  • Normalize this direction and create a ray out of it based on the viewer’s position.
  • For each object in the scene, test if the ray intersects any of the objects and record the object closest to the viewer.
  • If an object was found to have intersected the ray, color the pixel that corresponds to the ray by adding the contributions of all lights that affect this object.
  • Once complete, save the image to a file, present it to the screen, or use it in some meaningful manner.

The pseudo-code samples do not describe complex ray tracers or ray tracing–related algorithms. Although simple in design, they provide an instructive look at what it takes to create a ray tracer, and they should give you an idea of how much work the CPU does even for simple ray tracing. The key idea is to use ray intersections to gradually build up the rendered image. Ray intersection tests are expensive, and performing hundreds of thousands or even millions of them takes a lot of processing power and time. The number of objects in the scene also greatly affects the number of intersection tests that must be performed.

When you create a ray from a screen pixel, the origin of the ray can be the viewer’s position, for example, and the direction can be based on the pixel’s width and height location on the imaginary image plane. A pixel value of 0 for the width and 0 for the height would be the first pixel, normally located at the upper-left corner of the screen. A pixel whose width matches the image’s width resolution and whose height matches the height resolution would be at the lower-right corner of the image. All values in between fall within the image’s canvas.

By using the pixel’s width, height, and some constant for the depth (let’s use 255 for example), a vector can be built and normalized. This vector needs to be unit-length to ensure accurate intersection tests. Assuming a variable x is used for the pixel’s width location and a variable y is used for the height, the pseudo-code to build a ray based on the pixel’s location may look like the following, which could take place before object intersection tests:

ray.direction = Normalize(Vector3D(x - (width / 2),
                                   y - (height / 2),
                                   255));

Note that the sample above adjusts x and y so that the origin is at the center of the image, meaning the middle of the screen lies straight ahead of the camera. Once you have this ray, you can loop through each of the objects in the scene and determine which object is closest. The object’s information and the point of intersection are then used to color the pixel at the current location.
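Putting everything together, a bare-bones backward ray tracer in C++ might look like the sketch below. It builds on the illustrative Vector3D, Ray, Sphere, and Intersects definitions from earlier; the Scene, Image, and Shade_Pixel names are stand-ins invented for this example, the camera sits at the origin looking down the positive Z axis, and the shading is a deliberately minimal diffuse term with no shadows, textures, or reflections.

#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative scene and image containers; the names are placeholders.
struct Light { Vector3D position; };
struct Scene { std::vector<Sphere> spheres; std::vector<Light> lights; };
struct Image
{
    int width = 0, height = 0;
    std::vector<Vector3D> pixels; // one RGB color per pixel, row by row
};

// Placeholder diffuse shading: brightness comes from the angle between the
// surface normal and the direction to the light (no materials or shadows).
Vector3D Shade_Pixel(const Vector3D& hit, const Light& light, const Sphere& obj)
{
    Vector3D normal  = Normalize(hit - obj.center);
    Vector3D toLight = Normalize(light.position - hit);
    float diffuse = std::max(0.0f, normal.Dot(toLight));
    return {diffuse, diffuse, diffuse};
}

void Backward_RayTracing(const Scene& scene, Image& image)
{
    for (int y = 0; y < image.height; ++y)
    {
        for (int x = 0; x < image.width; ++x)
        {
            // Build the ray through this pixel (camera at the origin,
            // looking down +Z), exactly as in the snippet above.
            Ray ray;
            ray.origin = {0.0f, 0.0f, 0.0f};
            ray.direction = Normalize({x - image.width * 0.5f,
                                       y - image.height * 0.5f,
                                       255.0f});

            // Find the closest sphere the ray intersects.
            const Sphere* closest = nullptr;
            float minDist = 1e30f;
            for (const Sphere& s : scene.spheres)
            {
                float dist;
                if (Intersects(ray, s, dist) && dist < minDist)
                {
                    minDist = dist;
                    closest = &s;
                }
            }

            // Sum every light's contribution at the intersection point.
            Vector3D color = {0.0f, 0.0f, 0.0f};
            if (closest)
            {
                Vector3D hit = ray.origin + ray.direction * minDist;
                for (const Light& light : scene.lights)
                    color = color + Shade_Pixel(hit, light, *closest);
            }
            image.pixels[y * image.width + x] = color;
        }
    }
}

Calling code would size image.pixels to width * height before rendering and then write the buffer out to a file or display it, just as the pseudo-code's Save_Render step suggests.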
