The rendering process is one of the most technically complex stages of the 3D production pipeline, but it can be related back to photography: just as photographers, after lighting and staging a scene, must develop their images before they can be exhibited. Taking all the components of the production thus far, the artist builds the final scene to be viewed. As Slick (2016) describes, during the rendering process, “the entire scene’s spatial, textural and lighting information are combined to determine the colour value of each pixel in the flattened image.”
Essentially, the three-dimensional scene is depicted as a two-dimensional picture, taken from a specific location, perspective, rotation and scale, among other specifications decided by the artist. This is practically the equivalent of taking a photo of a physical model, scene and all its accompaniments. Rendering may add the simulation of realistic lighting, shadows, atmosphere, colour, texture and optical effects, or, conversely, may style the image to appear abstract or artistic in nature (Birn, 2002).
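The core idea of flattening a 3D scene into a 2D picture can be sketched in a few lines of code. The following is a minimal illustration, not any particular renderer's implementation: it projects a single 3D point in camera space onto pixel coordinates using a simple pinhole-camera model, with assumed focal length and image size.

```python
def project(point, focal_length=1.0, width=640, height=480):
    """Project a 3D point (x, y, z) in camera space to 2D pixel coordinates."""
    x, y, z = point
    if z <= 0:
        return None  # point is behind the camera, so it is not visible
    # Perspective divide: points farther from the camera land closer
    # to the image centre, which is what creates the sense of depth.
    u = (x * focal_length) / z
    v = (y * focal_length) / z
    # Map from normalised image-plane coordinates to pixel coordinates.
    px = int((u + 1) * 0.5 * width)
    py = int((1 - v) * 0.5 * height)
    return px, py

# A point directly in front of the camera lands at the image centre.
print(project((0.0, 0.0, 5.0)))  # (320, 240)
```

A full renderer repeats this kind of calculation, combined with lighting and texture lookups, for every visible surface point contributing to every pixel.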
Rendered images may differ from the working view in the animation software, where textures or particle effects are displayed only in their most basic form; it is the render that reveals the scene at its full graphical quality (Sanders, 2017). This also means that rendering highly complex animations can be time-consuming, since the pixels of every frame must be computed. However, many programs offer low-quality render settings that allow for faster test renders.
The two major types of rendering differ in one key respect: the speed at which the images are computed.
In real-time rendering, used prominently in gaming and interactive graphics, images are computed at a rapid pace in response to how the player interacts with the game environment (Slick, 2016). As it is practically impossible to predict how players will interact with that environment, images are rendered in real time as the scene or game progresses. A minimum of 18–20 frames per second allows the motion to appear fluid, whilst anything less makes it appear ‘choppy’ (Slick, 2016).
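That frame-rate threshold can be restated as a per-frame time budget, which is how real-time engines usually think about it. A small illustrative calculation (the 60 fps figure is a common modern target, an assumption rather than something from the sources above):

```python
def frame_budget_ms(fps):
    """Time available to render a single frame, in milliseconds."""
    return 1000.0 / fps

# At the ~20 fps lower bound for fluid motion, each frame must be
# rendered within about 50 ms; at a common 60 fps target, under 17 ms.
print(round(frame_budget_ms(20)))  # 50
print(round(frame_budget_ms(60)))  # 17
```

If the engine misses this budget, the frame rate drops and the motion begins to look choppy, which is why so much effort goes into pre-computing work ahead of time.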
This technique has improved in recent years, with dedicated graphics hardware (GPUs) allowing as much information as possible to be pre-computed. The majority of a game’s lighting environment is pre-computed and baked directly into the texture files, improving render speed (Slick, 2016).
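The idea of baking lighting into textures can be sketched as follows. This is a deliberately tiny illustration, not any particular engine's implementation: brightness values are pre-computed once into a small “lightmap”, and at runtime the engine merely looks a value up instead of simulating the lights again.

```python
# Pre-computed brightness values, 0.0 (dark) to 1.0 (fully lit),
# baked ahead of time for a 4x4 patch of a surface. A real lightmap
# is an image file at much higher resolution.
LIGHTMAP = [
    [0.1, 0.2, 0.2, 0.1],
    [0.2, 0.8, 0.9, 0.3],
    [0.2, 0.9, 1.0, 0.3],
    [0.1, 0.3, 0.3, 0.1],
]

def baked_light(u, v):
    """Nearest-texel lookup of pre-computed lighting at UV coords in [0, 1]."""
    size = len(LIGHTMAP)
    row = min(int(v * size), size - 1)
    col = min(int(u * size), size - 1)
    return LIGHTMAP[row][col]

# The centre of this surface was baked as fully lit.
print(baked_light(0.5, 0.5))  # 1.0
```

The trade-off is flexibility: baked lighting is cheap to read back but cannot react to lights that move at runtime, which is why engines combine it with a smaller amount of dynamic lighting.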
Offline or pre-rendering is mainly used where rendering speed is less of a concern, and is frequently seen in animation and similar industries, where photo-realistic environments, characters and objects are rendered to a higher standard of visual complexity (Slick, 2016). This allows for higher polygon counts and 4K or higher-resolution texture files, although at the cost of far longer render times per frame.
Before and after rendering a shot of Monsters University. Retrieved from http://www.theverge.com/2013/6/21/4446606/how-pixar-changed-the-way-light-works-for-monsters-university
Birn, J. (2002) 3D Rendering (For Dummies). Retrieved from http://www.3drender.com/glossary/3drendering.htm
Sanders, A. L. (2017) In Computer Animation, What is Rendering? Retrieved from http://animation.about.com/od/faqs/f/In-Computer-Animation-What-Is-Rendering.htm
Slick, J. (2016) What is Rendering? Retrieved from https://www.lifewire.com/what-is-rendering-1954