Due for UK release in April, the live-action film adaptation of Dr Seuss' best-selling children's book The Cat in the Hat stars Mike Myers as the mischief-making feline in the striped stove-pipe hat who unleashes merry chaos into the world of two bored children left alone on a rainy afternoon. Conjuring up the whimsical world of Dr Seuss for the film was the task facing leading visual-effects studio Rhythm & Hues.
The studio rose to the challenge, creating over 270 shots of visual effects and animation, including a fully CG character, 3D matte paintings and 3D digital set extensions, along with numerous volumetric effects such as clouds, vapour, flying globs of goo, and the tornado-like Vortex that's the source of most of the Seussian madness.
Work was organized around several lead technical directors (TDs), who decided upon the look and technique of each shot before passing these on to teams of one to four artists to complete.
Rhythm & Hues put Side Effects Software's Houdini at the heart of its production pipeline on the project, with effects artists developing custom tools using Houdini's VEX programming language and its new VOPs visual code development system. "Houdini is a very powerful, mature, and elegant 3D software system," comments Caleb Howard, Rhythm & Hues' visual-effects lead on the film.
According to Howard, the biggest challenge he faced on the project was the so-called Vortex shot. In the film, a major turning point occurs when the Cat in the Hat's box is opened, releasing a Pandora-esque volume of troubles from the Seussian world. A churning tornado of pink mist and purple goo emerges from the open box, scooping up loose objects from the house and transforming the house's appearance to a more cartoon-like one.
"There were many things this effect needed to achieve. The tornado itself needed to be believable, and believably integrated into the photographed set. On its own, this entailed the integration of two photographic plates (the normal house interior, and the Seussified house interior), a digital version of the transforming interior, a pink digital tornado, the purple goo, the actors who were shot against a greenscreen separately, and the digital debris," explains Howard. "Ignoring the technology and effort required to create each of these elements, the very composition of all of these together into a seamless and coherent whole was a very complex task."
The use of volumetrics had a direct impact on render times. Unlike surface models, which calculate the interaction of light with a material at a few specific points, volume models calculate this interplay at every point along the light's path as it penetrates deeper into the material - a process known as line integration. The computation is mathematically far more intense, and render times are therefore far longer.
Rhythm & Hues developed and used a suite of volumetric tools called Virtual Cloud Tank, or CT for short, which features custom plug-ins for Houdini. Among these is a ray-marcher renderer, with options to adjust parameters such as ray-termination heuristics - the depth to which light penetrates - and the size of each step to compute along a ray of light's path.
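Howard's two tuning parameters map onto the basic structure of any ray marcher: step along the ray, accumulate opacity, and stop when nothing further would be visible. As a rough illustration in Python - the density field, step size and cutoff below are invented for the sketch, and nothing here reflects R&H's proprietary Cloud Tank code:

```python
# Toy front-to-back ray marcher: illustrative only, not R&H's plug-in.
import math

def density(x, y, z):
    """Toy density field: a soft sphere of smoke centred at the origin."""
    r = math.sqrt(x * x + y * y + z * z)
    return max(0.0, 1.0 - r)  # falls off linearly to zero at r = 1

def march(origin, direction, step=0.05, max_dist=4.0, opacity_cutoff=0.99):
    """Accumulate opacity along a ray in fixed steps (line integration)."""
    transmittance = 1.0   # fraction of light still reaching the eye
    accumulated = 0.0     # opacity gathered so far
    t = 0.0
    while t < max_dist:
        p = [origin[i] + direction[i] * t for i in range(3)]
        # Beer-Lambert absorption over one step of the march
        alpha = 1.0 - math.exp(-density(*p) * step)
        accumulated += transmittance * alpha
        transmittance *= 1.0 - alpha
        if accumulated >= opacity_cutoff:  # ray-termination heuristic
            break
        t += step
    return accumulated

print(round(march((0.0, 0.0, -2.0), (0.0, 0.0, 1.0)), 3))  # → 0.632
```

Shrinking the step size or deepening the termination cutoff multiplies the number of density samples per ray, which is exactly why a mis-set parameter can make frames take near-forever to render.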
"There's a lot of parameters to understand, and to tweak," says Howard. "Getting it wrong can easily make an image require near-infinite times to compute. When it was just humming along on The Cat in the Hat, the fast renders would take less than an hour a frame."
A fish in and out of water
"The whole game is reducing the number of samples required, and in axing the least significant samples first. A good first optimization on a ray marcher is to stop adding up density samples along a ray when the sum reaches fully opaque, as you can't see anything beyond a certain depth into thick smoke. A more sophisticated sample reduction technique adapts the step size - taking bigger steps where the density is pretty constant, and taking a more closely measured look at regions where the density fluctuates rapidly. So-called adaptive step-size algorithms let you put the most samples where there's more detail to look at," he explains.
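The two optimisations Howard describes - stopping once the ray is effectively opaque, and adapting the step size to how fast the density changes - can be sketched as follows. The thresholds and the change-detection heuristic are assumptions made for the illustration, not R&H's algorithm:

```python
import math

def adaptive_march(density, origin, direction, base_step=0.02, max_dist=4.0):
    """Front-to-back marching with early termination and a crude,
    illustrative density-adaptive step size."""
    transmittance, t = 1.0, 0.0
    prev_d = density(*origin)
    while t < max_dist:
        p = [origin[i] + direction[i] * t for i in range(3)]
        d = density(*p)
        # Bigger steps where density is nearly constant, small ones
        # where it fluctuates rapidly (assumed heuristic).
        step = base_step if abs(d - prev_d) > 0.01 else base_step * 4
        transmittance *= math.exp(-d * step)
        if transmittance < 0.01:  # effectively opaque: stop sampling
            break
        prev_d = d
        t += step
    return 1.0 - transmittance  # final opacity seen by the camera
```

The pay-off is largest in thick, smooth volumes: rays through dense smoke terminate after a handful of samples, and rays through empty or uniform regions stride across them in a quarter of the steps.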
For the purple goo that explodes from the box and transforms the house, Howard and his team were asked to create a specific look. The concept art called for a quasi-intelligent, gravity-defying viscous liquid with specular ringing around the edges of each blob. The goo - dubbed 'chicken fat', says Howard - was created with particle systems with metaballs attached to each particle, and a noise field was then used to disrupt the metaball function to add areas of finer detail. The specular ringing around each blob was accomplished by a shader that ran a sine wave over the surface, indexed on the angle between the surface normal at each point and the viewing direction, explains Howard.
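The ringing trick can be approximated in a few lines: a sine wave driven by the facing ratio (the cosine of the angle between surface normal and view direction) peaks at regular intervals as the surface curves away, producing concentric bright bands near a blob's silhouette. The band count and sharpening exponent below are guesses for illustration; R&H's actual shader is not published:

```python
import math

def ringing(normal, view, bands=6.0):
    """Sine wave indexed on the facing ratio, sharpened into rings.
    Assumes both input vectors are unit length."""
    facing = sum(n * v for n, v in zip(normal, view))  # cos of the angle
    wave = math.sin(facing * bands * math.pi)
    return max(0.0, wave) ** 4  # clamp and sharpen peaks into distinct rings
```

Surfaces facing the camera head-on return roughly zero, while glancing angles near a blob's edge light up wherever the sine wave crests - the 'ringing' the concept art asked for.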
The final effect consists of many elements that were separately rendered before being merged back together in the composite. Spherical HDRI (High Dynamic Range Imaging) photos were taken of the live-action set and used for the reflections. The refraction, says Howard, was "a dreadfully inaccurate, but astonishingly rapid fake-up" that warped the photographic plate behind the goo to boost the cartoony effect of the scene.
"Motion blur is a bitch in volumetric composition," says Howard. "It's sort of a double-whammy of blurriness, with the softness of the volume being compounded by the softness caused by the motion blur of the objects." Calculating the contribution of the three layers - the motion-blurred soft volume in front of the object, the blurry object itself, and the motion-blurred soft volume behind the object - was complex, he recalls. "Compensating for the fact that the volume and the objects were being motion blurred using different software was especially troublesome, as the inaccuracies tend to reveal themselves as ugly edges and hard lines in the image," says Howard.
To work around this problem, the artists created a bounding object for each piece of debris. So, for example, the bicycle's geometry was copied several times along its trajectory to give a hard surface to the shape of the bike stretched along its path. The volume effect in front of the stretched object was rendered separately from the volume of everything behind the stretched object. "In the composite, putting these together one over the other yielded the volume as if it had been rendered completely without debris, but if you put the motion blurred debris between the front and back volumes, the effect was seamless," explains Howard.
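In compositing terms, the layering Howard describes is a chain of Porter-Duff 'over' operations: the front volume over the back volume alone rebuilds the debris-free volume, and slotting the motion-blurred debris between the two layers keeps the edges seamless. A toy premultiplied-alpha sketch - the pixel values are invented, not production data:

```python
def over(fg, bg):
    """Porter-Duff 'over' on premultiplied (r, g, b, a) pixels."""
    return tuple(f + b * (1.0 - fg[3]) for f, b in zip(fg, bg))

# Illustrative premultiplied pixel values for one point in the frame.
front_volume = (0.2, 0.1, 0.2, 0.3)   # soft volume in front of the debris
debris       = (0.4, 0.3, 0.1, 0.8)   # motion-blurred stretched object
back_volume  = (0.1, 0.0, 0.1, 0.5)   # soft volume behind it

# Front over back rebuilds the volume as if rendered without debris;
# inserting the debris between the two layers composites it seamlessly.
without_debris = over(front_volume, back_volume)
with_debris = over(front_volume, over(debris, back_volume))
```

Because 'over' is associative, splitting the volume render at the stretched bounding object costs nothing in the composite - the two halves recombine exactly, with or without the debris layer in between.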
Balancing the pursuit of technical accuracy with the time constraints of film-making is one of the chief talents required of visual-effects artists, according to Howard. Too much technical accuracy can hinder an effect, he says, when the answer is just to "fake it".
"We're in the business of fooling people into believing that they're seeing something that's real, in some sense. Most simulation programmers are taught by the academic papers they read to expect that the most accurate solution is always preferred. The thing is, it's almost always possible to generate a solution which is 80 per cent accurate using only 20 per cent of the computation that a totally realistic solution would require," he says. "So, the trick is then to build simulation tools that take advantage of techniques to 'cheat' the realism, and save a lot of computation time. To me, this is where the artistry lies."
Software: Side Effects Software
Contact: Rhythm & Hues, +1 310 448 7500