Special effects are now a central part of the moviemaking process. But as audiences demand more, effects houses are being forced to ramp up their processing power like never before.
By the end of the three-part Lord of the Rings project, Weta Digital, the New Zealand-based effects house, had built a massive, 3,200-processor 3D rendering server farm to cope with the load. The installation ranks on the TOP500 supercomputer list as one of the world's largest supercomputing sites. With some 2,400 of those processors residing on blade servers (small rack-mounted servers) and the remainder on traditional servers, it's also one of the most compact.
Weta and other visual effects studios are rapidly turning to large clusters of blade servers, often running Linux, as they balance the need for more processing power with the desire to minimize costs and maximize the use of valuable floor space.
Special effects are playing an increasingly large role in movies because audiences want them, says Greg Butler, digital computer graphics supervisor at Weta. "Film audiences expect visual effects to keep blowing them away. The only way this is possible is through the constant upgrading of our infrastructure," he says.
With the Lord of the Rings trilogy, the number of visual effects shots started at 540 in the first film and roughly doubled for each of the next two movies. Industrial Light & Magic (ILM) in California faces similar pressures. "In the first Jurassic Park movie, we did 75 shots," says Chief Technology Officer Cliff Plumer. "Now, with a Star Wars movie, every shot has some effect in it," totalling 2,000 to 2,500 shots per film.
The processing power required to render even a few shots is significant, says George Johnsen, chief animation and technical officer at Threshold Digital Research Lab. "In the visual effects business, there's no end to how many computers you can use," he says. A single shot can range from a few seconds to several minutes. Each second of film includes 24 frames, each containing up to 4,096 x 3,112 pixels in 32- or 64-bit colour. Separate passes must be made for each object that requires rendering in the frame and for attributes such as texture, lighting, and reflections.
"In the forthcoming movie Foodfight!, there is a scene with 13,000 extras, and they all have animation cycles," says Johnsen. And artists often repeat the rendering process to improve quality. As many as 150 passes may be required - a frame can be processed on only one CPU at a time and takes 48 to 72 hours to complete, he says. Threshold Digital already has 512 processors in its render farm and plans to double that using IBM eServer BladeCenters equipped with dual-processor HS20 server blades in the next three months.
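A little arithmetic shows why studios keep adding processors. The sketch below uses only the figures quoted above (24 frames per second, 48 to 72 hours per frame, one frame per CPU at a time, a 512-processor farm); the function names are mine and the numbers are illustrative, not any studio's actuals.

```python
# Back-of-the-envelope render-farm arithmetic from the article's figures.

FPS = 24     # frames per second of film
CPUS = 512   # Threshold Digital's current farm size

def frames(shot_seconds: float) -> int:
    """Number of film frames in a shot of the given length."""
    return int(shot_seconds * FPS)

def farm_days(shot_seconds: float, hours_per_frame: float, cpus: int = CPUS) -> float:
    """Farm-days of CPU capacity one rendering pass consumes,
    assuming every CPU can be kept busy with a frame."""
    return frames(shot_seconds) * hours_per_frame / cpus / 24

print(frames(10))                         # a 10-second shot: 240 frames
print(round(farm_days(10, 48), 2))        # one 48 h/frame pass: ~0.94 farm-days
print(round(150 * farm_days(10, 48), 1))  # 150 passes: ~140.6 farm-days
```

A single 48-hour-per-frame pass on a 10-second shot consumes nearly a full day of the whole 512-CPU farm's capacity; at 150 passes, one shot can tie up the farm for months of capacity, which is why Threshold is doubling its processor count.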
The move to blades has been swift. Like many other studios, ILM was using stand-alone workstations from Silicon Graphics to render images three years ago. Today, it has a 2,000-processor render farm, affectionately named Death Star, and half of the processors in it reside on blade servers from Boston-based Angstrom Microsystems. The blades are "taking over quickly," Plumer says. At night, all of ILM's desktop computers are added to the render farm as well. "Our processes are working 24 hours a day, seven days a week," he says.
As is the case in other industries, the studios are demanding more from IT while budgeting less. "Budgets aren't what they were. Blades allow us to be more efficient," says Johnsen.
Server blades are more efficient to deploy and manage. "We can get a system in-house and online within two days, where historically it would take us about a week to build a rack of processors," says ILM's Plumer.
While working on The Two Towers, the second movie based on J.R.R. Tolkien's Lord of the Rings novels, Weta suddenly found that it needed more horsepower. "We put in 500 processors in about three weeks, including building a new machine room," says CTO Milton Ngan. Weta uses BladeCenters running dual-processor HS20 server blades. The server racks, fully loaded, hold 84 blades, or 168 processors - a significant improvement over the density of Weta's 1U servers.
Management is also more automated. "With previous systems, we'd have to physically go to each machine," Ngan says. The management software for IBM's BladeCenter, IBM Director, lets Weta use scripting to remotely configure blades, update BIOSs and other firmware, and reboot or turn individual blades on or off over the network.
The increased processor power and density of blade server farms have yielded significant benefits, but they have also presented some unexpected challenges. "We packed (the blades) in pretty tight, then ran into power and cooling issues," says Plumer. Individual blades use less power - for example, IBM says its HS20s are 57 per cent more efficient than its 1U servers - but the blades are packed in much more densely, pushing power within each fully populated rack as high as 15 kilowatts for the BladeCenter and even more for some other blade server designs.
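The rack-level figures imply a power and heat budget that is easy to estimate. This rough sketch uses the article's numbers (15 kilowatts for a fully loaded rack of 84 dual-processor blades); the conversion constants are standard, everything else is illustrative.

```python
# Per-rack power and cooling arithmetic from the article's figures.

RACK_KW = 15.0          # quoted draw of a fully populated BladeCenter rack
BLADES_PER_RACK = 84
PROCS_PER_BLADE = 2

watts_per_blade = RACK_KW * 1000 / BLADES_PER_RACK
watts_per_proc = watts_per_blade / PROCS_PER_BLADE
btu_per_hour = RACK_KW * 1000 * 3.412   # 1 W ~ 3.412 BTU/h
cooling_tons = btu_per_hour / 12_000    # 1 ton of AC = 12,000 BTU/h

print(round(watts_per_blade, 1))  # ~178.6 W per blade
print(round(watts_per_proc, 1))   # ~89.3 W per processor
print(round(cooling_tons, 1))     # ~4.3 tons of cooling per rack
```

That last figure squares with the roughly five tons of cooling per rack that Threshold describes: nearly every watt delivered to a rack must come back out as refrigerated air.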
At ILM, more reliable blades, more efficient rack designs and the ability to spread out the blades to better balance the heat load in the room seem to have solved the problem, Plumer says.
Weta has dealt with hot spots, but more improvements are needed, Ngan says. One small room containing 1,000 processors has a concrete floor and a low ceiling that ruled out having a raised floor. Weta sealed the racks to improve airflow, installed three air conditioning units and piped air into the fronts of the racks to cool the blades. The blades no longer overheat, but the air temperature at the top of the rack is just under 85 degrees (75 degrees is the recommended maximum). "We are building a new machine room that will be better equipped to deal with blades," Ngan says.
Threshold Digital has a rack that uses IBM's Calibrated Vectored Cooling design to optimize airflow. Johnsen also installed an air conditioning unit that injects air into the top of each rack and exhausts out the bottom.
"We've had to do some very serious air conditioning to fill the racks up. Instead of feeding the room, we're feeding the racks," he says. "We're dumping five tons (60,000 BTUs) of AC into the racks." The power requirements surprised Johnsen, but "because we have four times the density of processors, that was a fair trade-off," he says.
The concentration of power created by migrating to blade server farms has also had ripple effects on the rest of the studios' IT infrastructures, which were designed to accommodate graphics workstations. "The faster processors have put a strain on our storage systems, which put a strain on the backup systems, which put a strain on our network," says Plumer. "On an average day, we push 75TB of data across our network." ILM's new data centre, due to open next year, will include a 10 Gigabit backbone. Weta has already moved to 10 Gigabit Ethernet.
As for storage, Weta has more than 60TB of network-attached storage (NAS) on 1,100 disk drives under the control of 17 filers. But processing many similar frames in parallel created bottlenecks. "When you have a couple of hundred processors wanting the same data, a single file server can't handle that," says Ngan. Weta spreads the files across multiple filers and developed a "virtualized global file system" to improve performance.
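Weta's "virtualized global file system" is its own software and its design isn't described here, but the underlying idea of spreading files across filers can be sketched generically: map each file path deterministically to one of the filers so that no single server fields every request. Everything in this snippet (the hashing scheme, the filer names) is a hypothetical illustration, not Weta's implementation.

```python
# Generic sketch: hash a file path to one of several NAS filers so that
# hundreds of render nodes fetching frames fan out across servers
# instead of hammering a single one. Filer names are made up.

import hashlib

FILERS = [f"filer{i:02d}" for i in range(17)]  # article: 17 filers

def filer_for(path: str) -> str:
    """Deterministically map a file path to a filer."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return FILERS[int(digest, 16) % len(FILERS)]

print(filer_for("/shots/helm/frame_0001.exr"))
```

Because the mapping is deterministic, every render node agrees on where a given file lives without consulting a central directory; the trade-off is that adding or removing a filer remaps most paths, which is one reason production systems use more elaborate schemes.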
Threshold moved to a Fibre Channel storage network and IBM's General Parallel File System, a high-performance cluster file system that supports concurrent file access. ILM is using a combination of NAS and storage-area network devices as well as near-line storage to deal with the large volumes of data that move on and off the network with each project. A single shot can require a terabyte of storage, Plumer says, up from a few hundred megabytes a few years ago, while the work on a single film may generate in excess of a petabyte of data.
All three studios are looking for ways to get even more out of render farms. ILM plans to double the size of its data centre to more than 12,000 square feet next year. As Weta begins production on a remake of King Kong, Ngan says he is contemplating phasing in blades that use Advanced Micro Devices' 64-bit Opteron chips or Intel's Extended Memory 64 Technology. He expects the larger memory space afforded by the 64-bit CPUs to speed processing times.
Threshold's Johnsen says the ultimate goal is to break the cycle of using one processor to render each frame. "This is why we are desperately developing multithread and relational grid strategies for our rendering," he says. Johnsen sees blades as a key part of the company's 10-year plan. "The natural progression is to some form of 3D grid computing ... and blades are the next logical step," he says.