What is: Rendering?

Dateline: May 16, 2001

In the world of 3D computer graphics (CG), there are many stages in creating photorealistic images. First, artists or designers model the world; in this process, lots of polygons are created that precisely describe the objects in the scene. After the objects are designed, the scene must be painted, or rendered, onto the screen, and to produce the rendering the scene must be lit with virtual lights.
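To make the modeling step concrete, here is a minimal sketch (in Python, with made-up names) of the kind of data a modeler produces: a list of 3D vertices, which polygons then index into. The helper below just computes the box an object fits in, a common first step before rendering.

```python
def bounding_box(vertices):
    """Axis-aligned bounding box of a list of (x, y, z) points."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# a unit cube: 8 vertices; a modeler would also emit the polygons
# (index triples) that connect them into faces
cube_vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

print(bounding_box(cube_vertices))  # ((0, 0, 0), (1, 1, 1))
```

A real scene is just this, multiplied: thousands of objects, each a pile of vertices and polygon indices.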

Rendering is often the most time-consuming phase in the production of a CG animation like Toy Story, Shrek, or other high-end movies. The process of rendering is often described as a "pipeline"; in fact, many computer graphics chips use an information-processing pipeline to carry the data from raw geometric primitives through to the final rendered image. There's a fabulous set of computer graphics lectures by Thomas Funkhouser at Princeton. Click on the links for rendering and pipeline for a set of slides about the rendering process and a 3D graphics pipeline. Explore his directory; there's lots of good stuff.

Those lectures from Princeton may get a tad technical and detailed, so here is a simplified version from simple-minded me!

In addition to the geometry, the other major thing to create is the lighting. Out in cyberspace, no one can see anything unless it's lit. Lighting can become very complex, and it can take an immense amount of time to get scenes looking just right. Lighting is typically defined using different types of lights with different mixtures of colors. Spotlights and floodlights have their computer analogs, and there are a few more types besides.
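The different light types mentioned above can be sketched as simple data structures. These class names and fields are illustrative, not from any particular package, but they match the parameters most systems expose:

```python
from dataclasses import dataclass

@dataclass
class PointLight:        # radiates equally in all directions, like a bare bulb
    position: tuple      # (x, y, z)
    color: tuple         # RGB, each component 0.0 to 1.0
    intensity: float

@dataclass
class SpotLight:         # the computer analog of a theatrical spotlight
    position: tuple
    direction: tuple     # where the cone points
    cone_angle: float    # degrees; geometry outside the cone gets no light
    color: tuple
    intensity: float

@dataclass
class DirectionalLight:  # parallel rays from far away, like the sun
    direction: tuple
    color: tuple
    intensity: float

# a warm key light aimed straight down at the scene
key = SpotLight((0, 5, 0), (0, -1, 0), 30.0, (1.0, 0.95, 0.9), 2.0)
```

Getting a scene to "look right" is mostly a matter of adjusting exactly these numbers, light by light.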

Given the geometry and the lights, we now need to take the 3D representation of a scene and turn it into a 2D set of pixels painted onto a display screen. To go from a 3D representation to a 2D image, the rendering process must project the 3D scene onto the 2D surface. This projection from 3D to 2D, and the subsequent image painting, is the rendering process. To paint the 2D image correctly, the renderer must take into account all the lights that could affect the particular section of the image being rendered. Two common rendering processes are "ray tracing" and "radiosity".
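The 3D-to-2D projection at the heart of this is just a divide by depth. Here is a minimal perspective projection, assuming the camera sits at the origin looking down the z axis (the focal-length parameter is a simplification of a real camera model):

```python
def project(point, focal_length=1.0):
    """Perspective-project a 3D camera-space point onto a 2D image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / z, focal_length * y / z)

# the same offset twice as far away lands half as far from the image center
print(project((2.0, 1.0, 2.0)))  # (1.0, 0.5)
print(project((2.0, 1.0, 4.0)))  # (0.5, 0.25)
```

That divide by z is why distant objects shrink, which is exactly the perspective effect the projection exists to produce.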

Ray tracing does just what it sounds like: it projects a ray (a line) from the viewer's synthetic eye until it hits the geometry in the scene, then follows it as it bounces around the objects in the scene. The resulting lighting produces images with clean, crisp reflections. What happens at each bounce is governed by the "material properties" of the object: things like reflectivity, diffuse lighting, specularity, and a few more. From a high-level authoring point of view, some systems let you define objects as "plastic" or "mirror" or "glass".
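The core operation a ray tracer repeats millions of times is the ray-object intersection test. Here is a sketch of the classic ray-sphere case, solving a quadratic for the hit distance (this assumes the ray direction is a unit vector):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest hit distance along a unit-length ray, or None if it misses.
    Solves |origin + t*direction - center|^2 = radius^2 for t."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t > 0 else None

# eye at the origin looking down +z, unit sphere centered 5 units away
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

At the hit point a real tracer would consult the material properties and spawn new reflection or shadow rays, which is the "bouncing around" described above.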

Radiosity is a completely different way of rendering a 3D scene. The idea is to calculate precisely the amount of light emanating from a light source, including the light that is reflected from the various surfaces. The resulting images are extremely realistic and avoid the computery (that's a technical term) look often found in ray-traced images. LightWave from NewTek is probably the most popular rendering system that uses radiosity for the rendering.
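The radiosity idea can be sketched numerically: each surface patch's outgoing light is its own emission plus the light it reflects from every other patch, weighted by precomputed "form factors" that say how much of one patch's light reaches another. The tiny two-patch scene below is made up for illustration:

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Iterate B_i = E_i + rho_i * sum_j F_ij * B_j until it settles."""
    n = len(emission)
    B = list(emission)
    for _ in range(iterations):
        B = [emission[i]
             + reflectance[i] * sum(form_factors[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# two facing patches: patch 0 emits light, patch 1 only reflects it
E = [1.0, 0.0]                  # emission of each patch
rho = [0.5, 0.5]                # reflectance of each patch
F = [[0.0, 0.5], [0.5, 0.0]]    # half of each patch's light reaches the other
print(solve_radiosity(E, rho, F))
```

Because the solution accounts for light bounced between every pair of surfaces, radiosity naturally produces the soft, diffuse interreflections (color bleeding, soft shadows) that make its images look so real.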

One of the cool things about rendering is that it can often be computed via parallel processing techniques. This mean that lots of different computers can be used to process a scene. If you think about the type of highly detailed rendering used for the movie "Shrek", it probably took somewhere in the neighborhood of at least 15 minutes on a high powered machine to compute the image for each frame. Given the awesome increase in computing power we've seen in the last few years it is quite reasonable to expect that in 3-5 years the machine on your desktop will be able to compute the very same scene at a frame rate of 30 fps (frames/per/second) which is the normal video frame rate (for NTSC). Imagine being able to not simply watch Shrek and the donkey go through their motions but to interact with them with ALL that detail. It will happen.
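The reason rendering parallelizes so well is that frames (and even individual scanlines) are independent, so a render farm just deals them out to machines. The sketch below shows the idea, plus the back-of-envelope arithmetic for a feature film; the numbers are illustrative, not actual production figures:

```python
def assign_frames(num_frames, num_machines):
    """Round-robin split: machine k renders frames k, k + num_machines, ..."""
    return {m: list(range(m, num_frames, num_machines))
            for m in range(num_machines)}

jobs = assign_frames(num_frames=12, num_machines=4)
print(jobs[0])  # [0, 4, 8]

# back-of-envelope: a 90-minute feature at 30 fps, 15 minutes per frame
frames = 90 * 60 * 30
machine_minutes = frames * 15
print(machine_minutes / (60 * 24), "machine-days")  # 1687.5 machine-days
```

With a farm of a few hundred machines, that multi-year single-machine job shrinks to days, which is exactly why studios render this way.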

That should do it for this quicky lesson on rendering. If you're interested, poke through those lectures on that Princeton site and learn lots more! May all your renderings approach real-time.

Sandy
