
Ray Tracing: Creating Stunning Photo-Realistic Images


In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane. The technique is capable of producing a very high degree of photorealism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as still images and film and television special effects, and less suited for real-time applications like computer games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection, refraction, scattering, and chromatic aberration.

Ray tracing is a global illumination based rendering method. It traces rays of light from the eye back through the image plane into the scene. The rays are then tested against all objects in the scene to determine whether they intersect any of them. If a ray misses every object, that pixel is shaded the background color. Ray tracing handles shadows, multiple specular reflections, and texture mapping in a very straightforward manner.
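To make that loop concrete, here is a minimal sketch in Python (not taken from any particular ray tracer): each ray is tested against every sphere in a toy scene, and the pixel takes the color of the closest hit, or the background color on a miss. The sphere positions, colors, and camera placement are made up purely for illustration.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses.
    The direction is assumed to be normalised, so the quadratic's a term is 1."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def trace(origin, direction, spheres, background=(0.2, 0.2, 0.2)):
    """Shade the ray with the color of the closest sphere it hits,
    falling back to the background color when every object is missed."""
    nearest_t, color = None, background
    for center, radius, sphere_color in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (nearest_t is None or t < nearest_t):
            nearest_t, color = t, sphere_color
    return color

# Toy scene: one red sphere straight ahead of a camera at the origin.
spheres = [((0.0, 0.0, -5.0), 1.0, (1.0, 0.0, 0.0))]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), spheres))  # hits the sphere -> red
print(trace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), spheres))   # misses -> background
```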

Ray Tracing, in a one-line description, is a method that allows you to create stunning photo-realistic images on a computer. All you need is a computer, some ray tracing software, a little imagination and some patience.

The main drawback of ray tracing is that it's not fast. The software mathematically models the light rays as they bounce around this virtual world, reflecting, refracting and generally having a good time until they end up in the lens of your imaginary camera. This can quite literally involve millions of floating-point calculations, and that takes time. Tracing an image can take anything from a few minutes to many days. It's a long process, I know, but the results can make it all worthwhile.

Ray tracing isn't the only method for creating photo-realistic pictures. There are packages like 3D Studio, which uses scanline rendering, Radiance, which uses radiosity, and so on. Although these don't count as ray tracing, the methods they use are often sufficiently similar to be worth discussing alongside it. These systems will be mentioned in a little more detail later on.

Note that ray tracing, like scan-line graphics, is a point-sampling algorithm. We sample a continuous image in world coordinates by shooting one or more rays through each pixel. Like all point-sampling algorithms, this leads to the potential problem of aliasing, which manifests in computer graphics as jagged edges or other nasty visual artifacts.
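One common remedy is supersampling: shoot several jittered rays per pixel and average their colors, so edges are sampled at many sub-pixel positions instead of one. The sketch below illustrates the idea; the `shade` callback and the gradient used in the example are assumptions made so the snippet is runnable, not part of any particular renderer.

```python
import random

def render_pixel(x, y, width, height, shade, samples=4):
    """Average `samples` jittered sub-pixel samples for the pixel at (x, y).
    `shade` maps normalised image-plane coordinates (u, v) to an (r, g, b) color."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        # Pick a random point inside the pixel rather than always its centre.
        u = (x + random.random()) / width
        v = (y + random.random()) / height
        total = [t + c for t, c in zip(total, shade(u, v))]
    return tuple(t / samples for t in total)

# Example with a trivial shade function: a vertical grey gradient.
print(render_pixel(10, 5, 64, 48, lambda u, v: (v, v, v)))
```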

The ray tracing algorithm builds an image by extending rays into a scene

In ray tracing, a ray of light is traced in a backwards direction. That is, we start from the eye or camera and trace the ray through a pixel in the image plane into the scene and determine what it hits. The pixel is then set to the color values returned by the ray.
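A rough sketch of how a pixel position becomes such a primary ray, assuming a simple pinhole camera sitting at the origin and looking down the negative z axis; the field of view, aspect ratio, and function name are illustrative choices, not any engine's API.

```python
import math

def camera_ray(u, v, aspect=16.0 / 9.0, fov_degrees=60.0):
    """Map normalised image-plane coordinates u, v in [0, 1] to a primary ray.
    Returns (origin, direction) with the direction normalised."""
    half_height = math.tan(math.radians(fov_degrees) / 2.0)
    half_width = aspect * half_height
    x = (2.0 * u - 1.0) * half_width   # left edge -> -half_width, right edge -> +half_width
    y = (1.0 - 2.0 * v) * half_height  # top of the image maps to +y
    direction = (x, y, -1.0)
    length = math.sqrt(sum(d * d for d in direction))
    return (0.0, 0.0, 0.0), tuple(d / length for d in direction)

# Example: the ray through the centre of the image points straight down -z.
print(camera_ray(0.5, 0.5))
```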

Imagine this scene before you: you are holding and reading this magazine. Now imagine your eye as the “camera” in a 3D game; the camera is the object doing the viewing, while the magazine is the scene being viewed. In ray tracing, an “eye ray” or “primary ray” is projected from the viewpoint of the camera. Ray tracing uses a physical simulation of ray propagation to render an object. The algorithm first shoots the primary ray from the perspective of the eye. It then determines which object is hit first along the path of the ray. In this example, that object would be the magazine. At this point the magazine’s material determines the behavior of the primary ray. If the material is transparent, the ray passes through. If it is a mirrored surface, the ray is reflected (with the angle of reflection and so on calculated by the ray tracing algorithm). If the material is slightly glossy magazine paper, it reflects some light and absorbs the rest, and so on. The ray tracing algorithm also determines whether the object it hits is in shadow. To do this, it shoots a ray from the magazine towards the light source; if this ray reaches the source, then the magazine can “see” the light source and is therefore lit. If the ray is obstructed, the magazine is in shadow.
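That shadow test can be sketched directly. In the toy setup below (spheres described the same way as in the earlier sketch, with made-up positions), a ray is shot from the shaded point towards the light and tested against every sphere; if anything blocks it before the light is reached, the point is in shadow.

```python
import math

def in_shadow(hit_point, light_pos, spheres, eps=1e-4):
    """Return True if any sphere blocks the ray from hit_point to light_pos."""
    to_light = tuple(l - p for l, p in zip(light_pos, hit_point))
    dist_to_light = math.sqrt(sum(d * d for d in to_light))
    direction = tuple(d / dist_to_light for d in to_light)
    # Nudge the origin along the shadow ray so the surface does not
    # shadow itself due to floating-point error at the hit point.
    origin = tuple(p + eps * d for p, d in zip(hit_point, direction))
    for center, radius, _color in spheres:
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue
        t = (-b - math.sqrt(disc)) / 2.0
        if 0.0 < t < dist_to_light:
            return True   # an occluder sits between the point and the light
    return False

# Example: a small sphere directly between the shaded point and the light blocks it.
spheres = [((0.0, 2.0, 0.0), 0.5, (1.0, 1.0, 1.0))]
print(in_shadow((0.0, 0.0, 0.0), (0.0, 5.0, 0.0), spheres))  # True: in shadow
print(in_shadow((3.0, 0.0, 0.0), (3.0, 5.0, 0.0), spheres))  # False: lit
```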

This is, of course, an oversimplification of the process. The take-away is that ray tracing takes a holistic look at the entire scene to be rendered, whereas a “rasterization” approach breaks the entire scene down into billions of component triangles. Because of how it works, the cost of raster graphics processing grows linearly with the number of pixels to be processed; the cost of ray tracing, on the other hand, grows linearly with the number of rays shot.
This recursive ray tracing of a sphere demonstrates the effects of narrow depth of field, area light sources, diffuse interreflection, ambient occlusion, and Fresnel reflection.

Intel’s research into ray tracing showed an interesting behavior. If the scene being rendered is kept static, there is a point at which a software-assisted ray tracing technique is almost as fast as a hardware-accelerated raster technique. Software-assisted here means CPU-driven. Furthermore, the research found that when multiple processors are thrown at ray tracing, performance scales almost linearly with the number of processors. Multiple processors became multi-core processors and will eventually become many-core processors.

Intel thus envisions rendering graphics with a ray tracing technique instead of a raster approach. If you consider the architecture of the Larrabee processor, with 16 to 24 cores each running at 1.7 GHz or more, a dedicated unit to accelerate some ray tracing calculations, and an extended vector instruction set to help, you can see why ray tracing is the future for Intel.