How do you see?
No really, how do you "see" things? You might say that the screen in front of you creates an image, and that image is received by your eyes: it passes through your pupil and lens, lands on your retina, and is carried to your brain by the optic nerve - and you'd be right! The really interesting part, though, is *how* that image gets to your eyes. On your phone, an image is displayed on a 2D panel, and the light emitted from that panel travels to your eyes. Makes sense, photons and stuff. Now tilt your phone away from your eyes for a moment. Notice how the light from your phone dims a bit from your point of view? And if you're reading this in the dark, the light now lands somewhere else, still carrying enough color that you can sort of tell what's on your phone, yet also taking on a little of whatever surface it's hitting.
This is, in essence, a mechanic of lighting that movies, games, and professional 3D renderers have been working for years to achieve.
Now let's think about a different sort of view - one that applies to older 3D movies and, still today, to most games.
Picture yourself at the movies, in front of your TV, or at your computer. You're looking at a 2D screen, and whatever appears on that screen is all you see. The view only changes when something moves out of the way of something else. You don't see the shadow of the person standing around the corner from you, because you can't see the person. While the scene in front of you looks realistic enough, something is just...off.
Rasterization
I'll start with my second example so we can keep the good stuff for the second act. Rasterization is the task of taking an image described in a shape format and converting it into a "raster image": a grid of pixels, dots, or lines which, displayed together, recreate a scene that was once represented by 3D shapes. It works just as well to say that the photo on your phone is a rasterized image. The photo you took at the birthday party of the second cousin you don't really care for is a 2D image built from data representing 3D shapes. While the photo analogy of a 3D scene becoming a 2D image is close, it doesn't quite hit the mark on what I'm talking about: lighting, and how lighting in ray-traced images goes far beyond what's possible for even the most advanced takes on rasterization.
Rasterization in 3D games and movies is the process by which a scene is drawn. To rasterize an image, the view camera casts a ray out from each pixel toward the scene; the ray intersects an object and returns to the point it came from, carrying back data about the object it hit. This data includes texture color, lighting intensity, shading, and any post-processing injected when the ray returns to the view camera. This rendering method is very efficient, primarily because the amount of data collected is limited to a single surface and whatever properties that surface carries. That makes rasterization ideal for video games, where a 1080p screen must refresh anywhere from 30 to 144 times per second, meaning a ray must be cast for all 2,073,600 pixels at least 30 times a second. Wild, right? Some games do an absolutely incredible job of mimicking true-to-life light detail, some breathtakingly so. But the difference really shows in a side-by-side comparison: there are odd tricks of light that ray-tracing achieves so well, and that rasterization just can't.
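To make that "one surface per pixel" cost model concrete, here's a minimal sketch in Python. Everything in it (the Sphere type, the pinhole camera, the first_hit helper) is hypothetical scaffolding made up for illustration, not any real engine's API - and strictly speaking, firing a primary ray per pixel like this is closer to ray casting, while real rasterizers project triangles onto the screen instead; the cost model is the same either way.

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple  # (x, y, z)
    radius: float
    color: tuple   # (r, g, b) in 0..1

def first_hit(origin, direction, spheres):
    """Return the nearest sphere along the ray, or None.

    This single lookup is all the data a rasterized pixel gets.
    """
    nearest, nearest_t = None, float("inf")
    for s in spheres:
        # Standard ray-sphere quadratic (a == 1 for a unit direction).
        oc = tuple(o - c for o, c in zip(origin, s.center))
        b = 2.0 * sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - s.radius ** 2
        disc = b * b - 4.0 * c
        if disc >= 0.0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 0.0 < t < nearest_t:
                nearest, nearest_t = s, t
    return nearest

def render(width, height, spheres):
    """One sample per pixel: whatever surface answers first IS the pixel."""
    image = []
    for y in range(height):
        for x in range(width):
            # Map the pixel onto a pinhole camera sitting at the origin.
            u = (x + 0.5) / width * 2.0 - 1.0
            v = 1.0 - (y + 0.5) / height * 2.0
            length = math.sqrt(u * u + v * v + 1.0)
            direction = (u / length, v / length, -1.0 / length)
            hit = first_hit((0.0, 0.0, 0.0), direction, spheres)
            image.append(hit.color if hit else (0.0, 0.0, 0.0))
    return image

# At 1080p and 30 fps, the inner loop body above would run 2,073,600
# times per frame, 30 times a second; a tiny frame keeps the demo quick.
frame = render(64, 36, [Sphere((0.0, 0.0, -3.0), 1.0, (1.0, 0.0, 0.0))])
print(len(frame), "pixels shaded")
```

The point of the sketch is the shape of the loop: each pixel asks the scene exactly one question and stops, which is why the cost scales so cleanly with resolution and refresh rate.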
Ray-tracing
So while rasterization has been the video game standard for a long time now, ray-tracing has been the movie industry standard for much longer. Do you remember seeing Toy Story when you were a kid? Like, the first one? That is a prime example of rasterization. While it looks good, the color of each object sits somewhat flat against the surfaces around it, never quite picking up the full color detail of its surroundings. Toy Story 4, however (I can't believe there's a fourth one now), utilizes ray-tracing, and any uncanny valley feeling the world gives off comes from it being a cartoon rather than from the lighting. Eyes reflect the color of the surface they're looking at, a red apple on a white plate leaves a tinge of red around the apple's base, an angled window pane shows the reflection of a person out of sight - ray-tracing is more true to real-world lighting because it follows real-world lighting.
As you look around, you know your eyes aren't shooting a beam of light out to see the things in front of you - it's the light sources around you, reflecting off of the things you're looking at, that make up what you see. When a photon is emitted from a light source, it reflects or refracts off of (or through) whatever surfaces it collides with, picking up information about each of them until it meets your eye. Ray-tracing is a method of modeling light transport theory, which deals with the mathematics of calculating energy transfers between media that affect visibility. What this means is that, despite the higher computational load, the returned data isn't just what your eye can see, but everything your eye receives. The image you perceive is the collective color and light data gathered from every bounce of every particle, and it provides a level of quality akin to real life. The problem with this level of detail is that it takes much longer to collect all the data needed to fully render the image. In a rasterized image you have, at minimum, two points of intersection: the object, and the screen view perceiving it. In ray-tracing, you may have hundreds of photon collisions before a particle reaches your view, so while the quality benefit is enormous, the computational cost is enormous too.
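Here's the same toy-scene machinery extended into a hedged sketch of that bounce-and-accumulate idea. Again, the Sphere type, the reflectivity knob, and the color-blending rule are illustrative assumptions, not any production renderer's model. Instead of stopping at the first surface, the ray mirrors off it and keeps going, and each collision tints the final color:

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple        # (x, y, z)
    radius: float
    color: tuple         # surface tint the ray picks up on collision
    reflectivity: float  # 0 = matte, 1 = perfect mirror

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def first_hit(origin, direction, spheres):
    """Nearest (sphere, t) along the ray, or (None, inf)."""
    nearest, nearest_t = None, float("inf")
    for s in spheres:
        oc = tuple(o - c for o, c in zip(origin, s.center))
        b = 2.0 * dot(direction, oc)
        c = dot(oc, oc) - s.radius ** 2
        disc = b * b - 4.0 * c
        if disc >= 0.0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 1e-4 < t < nearest_t:  # tiny offset so a ray can't re-hit its own surface
                nearest, nearest_t = s, t
    return nearest, nearest_t

def trace(origin, direction, spheres, depth=4):
    """Follow one ray through up to `depth` bounces, tinting as it goes."""
    sphere, t = first_hit(origin, direction, spheres)
    if sphere is None:
        return (0.05, 0.05, 0.1)  # dim "sky" once the ray escapes the scene
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((p - c) / sphere.radius for p, c in zip(point, sphere.center))
    color = sphere.color
    if depth > 0 and sphere.reflectivity > 0.0:
        # Mirror the ray about the surface normal and keep following it.
        d_n = dot(direction, normal)
        reflected = tuple(d - 2.0 * d_n * n for d, n in zip(direction, normal))
        bounce = trace(point, reflected, spheres, depth - 1)
        # Blend in what the next surface contributed: this is the red
        # apple leaving its tinge on the white plate.
        color = tuple((1.0 - sphere.reflectivity) * c + sphere.reflectivity * b
                      for c, b in zip(color, bounce))
    return color

# Hypothetical stand-ins: a glossy white "plate" and a red "apple".
scene = [Sphere((0.0, -0.4, -3.0), 1.0, (0.9, 0.9, 0.9), 0.6),
         Sphere((0.9, 0.4, -2.2), 0.5, (0.8, 0.1, 0.1), 0.1)]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), scene))
```

Note the recursion: a depth-4 trace can touch up to four extra surfaces per pixel, and real renderers also fire many such rays per pixel to average out noise - which is exactly where that enormous computational cost comes from.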
Remember, your eyes know what is real, even if you don't.