Ray Tracing vs. Rasterization

Learn the basic principles behind ray tracing and rasterization and what distinguishes the two techniques.

Overview

In the thin lens equation, we have a model for how light strikes an image plane, and the projection models cover how light propagates through a scene to reach our camera. Ray tracing is the next logical step: constructing an entire image by modeling light propagation through a 3D scene. We first cover ray tracing very briefly, as it is perhaps the most intuitive rendering technique and is widely used in the film, animation, and 3D design industries. Then, we take a closer look at rasterization, a more efficient technique that PyTorch3D relies on.

The ray tracing algorithm

Ray tracing is a class of algorithms with many commonly known variations, such as path tracing and photon mapping. These techniques all have one thing in common: they form images by modeling the physics of light transport from various light sources to a given camera. Ray tracing can model all sorts of real-world light transport effects, including reflection, refraction, scattering, ambient occlusion, subsurface scattering, and more.


In practice, modeling all the possible beams of light that could pass through a scene before reaching a camera would be computationally intractable. Instead, noting that only a small subset of this light actually reaches the camera, ray tracing walks backward from each pixel into the scene. For a given pixel, we cast a ray from the camera's origin, through the image plane, and into the scene. We then model how light interacts with whatever this ray intersects.

The basic algorithm follows this sequence:

  1. For each pixel, cast a ray from the camera center through the pixel center and into the scene; this is called a view ray.

  2. Where the view ray intersects an object, cast another ray from that point toward the light source; this is called a shadow ray.

  3. If the shadow ray does not intersect any other object, illuminate the point. Otherwise, leave the point in shadow.

This model can get more complex as more objects and light sources are added.
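The sketch below walks through these three steps for a single pixel in a toy scene with one sphere and one point light. It is a minimal illustration, not production ray-tracing code; the function names (trace_pixel, intersect_sphere) and all scene values are made up for this example.

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along a unit-length ray to the nearest sphere hit, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    sqrt_disc = np.sqrt(disc)
    for t in ((-b - sqrt_disc) / 2.0, (-b + sqrt_disc) / 2.0):
        if t > 1e-4:  # small epsilon avoids re-hitting the surface a ray starts on
            return t
    return None

def trace_pixel(camera, pixel_center, sphere_center, sphere_radius, light_pos):
    # Step 1: cast a view ray from the camera center through the pixel center.
    view_dir = pixel_center - camera
    view_dir /= np.linalg.norm(view_dir)
    t = intersect_sphere(camera, view_dir, sphere_center, sphere_radius)
    if t is None:
        return 0.0  # the ray hits nothing; leave the pixel as background

    # Step 2: cast a shadow ray from the hit point toward the light source.
    hit = camera + t * view_dir
    shadow_dir = light_pos - hit
    shadow_dir /= np.linalg.norm(shadow_dir)
    blocked = intersect_sphere(hit, shadow_dir, sphere_center, sphere_radius)

    # Step 3: illuminate the point only if the shadow ray reaches the light unobstructed.
    return 1.0 if blocked is None else 0.0

intensity = trace_pixel(
    camera=np.array([0.0, 0.0, 0.0]),
    pixel_center=np.array([0.1, 0.0, 1.0]),   # a pixel center on the image plane z = 1
    sphere_center=np.array([0.0, 0.0, 5.0]),
    sphere_radius=1.0,
    light_pos=np.array([5.0, 5.0, 0.0]),
)
print(intensity)  # 1.0: the view ray hits the sphere, and the point sees the light
```

Running this loop for every pixel of an image, and recursing on reflected or refracted rays, is what produces the richer effects mentioned earlier.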

The rasterization algorithm

Ray tracing starts from the image plane and works its way into the scene. In other words, it is an image-centric technique. Conversely, we could start from the objects in a scene and work our way to the image plane, which would be an object-centric technique. Rasterization is one such technique.

Here is how it works: for every face in the scene, its vertices are projected onto the image plane using the transform_points_screen function. For any pixel contained within the projected polygon, we can then apply shading to produce a color.

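To make the projection step concrete, the sketch below projects a single triangle with PyTorch3D's transform_points_screen (using a default FoVPerspectiveCameras) and then checks whether one pixel center falls inside the projected triangle. The inside_triangle helper and all scene values are illustrative assumptions, not part of the PyTorch3D API.

```python
import torch
from pytorch3d.renderer import FoVPerspectiveCameras

# One triangle in world coordinates (a batch of one face), placed in front of a
# default camera. All values here are made up for illustration.
cameras = FoVPerspectiveCameras()
verts = torch.tensor([[[-1.0, -1.0, 3.0], [1.0, -1.0, 3.0], [0.0, 1.0, 3.0]]])

# Project the vertices onto a 128x128 image plane; the result is in pixel
# coordinates, with the dropped third component holding depth.
screen_verts = cameras.transform_points_screen(verts, image_size=(128, 128))[0, :, :2]

def inside_triangle(p, a, b, c):
    """Edge-function test: p is inside if it lies on the same side of all three edges."""
    def edge(u, v, q):
        return (q[0] - u[0]) * (v[1] - u[1]) - (q[1] - u[1]) * (v[0] - u[0])
    e0, e1, e2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

a, b, c = [v.tolist() for v in screen_verts]
print(inside_triangle((64.0, 64.0), a, b, c))  # a pixel near the image center
```

In a full rasterizer, this inside test runs for every pixel against every projected face.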

Recall that rasterization first determines which polygon faces are relevant for shading a given pixel, and that z-buffering then selects the closest of those faces to use for shading. The PyTorch3D rasterizer follows this approach but makes some key modifications.
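The z-buffering step can be sketched in a few lines of NumPy. This is a framework-free illustration, not PyTorch3D's implementation: it assumes the faces have already been projected and reduced to the pixels they cover and their depths at those pixels.

```python
import numpy as np

H, W = 4, 4
z_buffer = np.full((H, W), np.inf)            # closest depth seen so far at each pixel
face_buffer = np.full((H, W), -1, dtype=int)  # index of the face that won each pixel

# Each entry: (face index, pixels the projected face covers, depth at those pixels).
faces = [
    (0, [(1, 1), (1, 2), (2, 1)], 5.0),       # a farther face
    (1, [(1, 2), (2, 1), (2, 2)], 3.0),       # a closer face that overlaps face 0
]

for face_idx, covered_pixels, depth in faces:
    for (y, x) in covered_pixels:
        if depth < z_buffer[y, x]:            # closer than anything rasterized so far
            z_buffer[y, x] = depth
            face_buffer[y, x] = face_idx      # this face now shades the pixel

print(face_buffer)
# Pixel (1, 1) keeps face 0; the overlapping pixels (1, 2) and (2, 1) end up
# shaded by the closer face 1, as does (2, 2).
```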

The PyTorch3D renderer

PyTorch3D implements a differentiable rasterizer for efficiency and modularity. This rasterizer is based on SoftRasterizer (SoftRas), which makes the following two major contributions to the traditional rasterization process:

  • Probabilistic pixel influence

  • Aggregation

Probabilistic pixel influence

Traditional rasterizers rely on a single face to color a given pixel. SoftRas instead defines a probability distribution over nearby faces: each pixel's color is a combination of face colors, weighted by the probability of each face contributing to that pixel's value. This has the effect of blurring individual faces together at their boundaries, so a single pixel can be influenced by several faces, weighted by their distance to the pixel.

For a given pixel $p^i$ and face $f_j$, the probabilistic pixel influence produces a probability map $D^i_j$ ...
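Although the formula is cut off above, in the SoftRas formulation this probability map is a sigmoid of the signed, squared screen-space distance between the pixel and the face, $D^i_j = \mathrm{sigmoid}(\delta^i_j \, d^2(i, j) / \sigma)$, where $\delta^i_j$ is $+1$ if the pixel lies inside the face and $-1$ otherwise, and $\sigma$ controls how quickly influence falls off. The sketch below assumes that form; the helper names and the $\sigma$ value are illustrative assumptions, not PyTorch3D defaults.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (all 2D points)."""
    p, a, b = map(np.asarray, (p, a, b))
    t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * (b - a)))

def probability_map(pixel, triangle, inside, sigma=1e-2):
    """Soft influence D_j^i of one projected triangle f_j on one pixel p^i."""
    # d is the distance from the pixel to the nearest edge of the projected face.
    d = min(point_to_segment_distance(pixel, triangle[k], triangle[(k + 1) % 3])
            for k in range(3))
    delta = 1.0 if inside else -1.0
    return 1.0 / (1.0 + np.exp(-delta * d ** 2 / sigma))  # sigmoid

triangle = [(-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)]
print(probability_map((0.0, 0.0), triangle, inside=True))    # deep inside: near 1
print(probability_map((0.9, 0.9), triangle, inside=False))   # far outside: near 0
```

A traditional, hard rasterizer corresponds to the limit where $\sigma$ approaches zero and each face's influence collapses to a binary inside/outside test.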