Best Buy
  • Price: Fire GL, £645; Quadro, £1,050 plus VAT

  • Company: ATI

  • Our Rating: We rate this 9 out of 10

Graphics cards, some would say, are at the heart of a professional 3D workstation. This is true to a certain extent, though super-fast display graphics are not the be-all and end-all of a top-notch 3D system. But they're still damn important. Here we test two cards based on GPUs (graphics processing units) from NVidia and ATI: ATI uses its chips on its own boards, while NVidia's appear in boards from Elsa. The Quadro board we tested was an NVidia-built test board, but it is identical to the card Elsa will ship. The Quadro family of GPUs has been in production for a few years and has been steadily improved; this new version is the Quadro4, while the ATI card uses the company's own Radeon 8800 chip.

Measuring up to 3D

The NVidia unit is two-thirds the size of a full-length card, and is solidly built. On the board is a large fan-and-heatsink combination for cooling the main GPU. Aside from that, the card is unexciting to look at (not that this is really a problem). At the blunt end there are two DVI outputs for dual flat-panel monitor setups, though connecting a CRT is still possible through an adaptor.

The ATI Fire GL 8800 is a mid-range graphics card based on the Radeon 8800 GPU. The Fire GL range is aimed at the pro 3D-graphics market and was previously owned by Diamond Multimedia, which was bought out by ATI last year. The Fire GL name stays the same, but the chipset is different – hence the change in model number to 8800 (previous Fire GL cards used simple numeric names). Like the NVidia card, the ATI sports 128MB of DDR RAM, but it's slightly smaller, being a half-length card. It also has minimal cooling, with a single small fan above a tiny heatsink, which means the Radeon 8800 must be an efficient chip.

What has caused some concern over the years is that NVidia's Quadro chip seemed to be largely the same as the cheaper, gamer-oriented GeForce GPU – though NVidia denies this. Indeed, some gamers have claimed that a GeForce card could be clocked to run like a Quadro – again denied by the company. Conversely, many 3D artists on a budget spurned the high-priced Quadro cards for cheaper, though less powerful, GeForce cards. NVidia needs to make a clear distinction between the current GeForce 4 and Quadro4 ranges to win back some confidence. For the first time, the Quadro is a different chip from the GeForce range – yet it gains its edge not from the chip itself, but from its throughput prowess.

Where the Quadro4 really excels is when working on large scenes. In our tests there was no degradation in performance when switching from 1,024-x-768 screen resolution to 1,280-x-1,024. This is a key difference between gaming and DCC: games tend to be displayed at low resolutions – 1,024-x-768 and below – while content creators using apps such as Maya or Max tend to work at extremely large resolutions.

Direct X to the point

Though both cards support DirectX, it's OpenGL performance we're concerned with, as this is the graphics API of choice for the majority of pro 3D artists and digital-content creators. We tested general, subjective performance in NewTek LightWave and Alias|Wavefront Maya, and also ran benchmark tests. It was clear that the Quadro4 performed well at high resolutions and high polygon counts. A scene with 94,000 polygons could be manipulated easily without slowing too much.
Full interactivity was available at half that number (47,000 polygons), and the beginnings of a slowdown became noticeable at about two-thirds (65,000 polygons). The Cinebench test showed the card performing at 1.68 times the speed of Cinebench's software display, which relies on the computer's CPU alone – in this case a 750MHz PIII.

The Fire GL performed extremely well, too. In LightWave 3D 7, the card produced the same times as the Quadro4 at both 1,024-x-768 and 1,280-x-1,024 pixels, and was slightly faster on the denser version of the scene. In Cinebench, though, the card only managed an OpenGL factor of 1.43 times the speed of the CPU-driven display. In the Maya test, the Fire GL felt slower, but sustained its interaction rate better at higher polygon counts than the NVidia card.

The ATI card will also support Linux, though at the time of writing drivers had yet to ship. On Windows, the card supports NT 4, 2000 and XP, while the NVidia card is specifically optimized to run under XP. We tested using Windows 2000 to keep the playing field level, and because most high-end creative apps are still awaiting XP certification.

In our tests, the two cards were similar in terms of performance, but the NVidia card costs a lot more. It has the edge in terms of out-and-out speed, though. However, the Fire GL is no slouch, and also has good throughput on larger scenes. If we were pressed, we'd choose the NVidia card simply because it was the faster card in moderate scenes in both the Maya and Cinebench tests. If your budget doesn't stretch, then the Fire GL is a great card for the money.
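To put the resolution and Cinebench figures in perspective, here's a minimal Python sketch of the arithmetic behind them. It's our own illustration rather than anything from either vendor's tools: the helper names and render timings are hypothetical, and it assumes the Cinebench OpenGL factor is simply the ratio of the software display's render time to the hardware-accelerated one.

```python
# Illustrative arithmetic only: helper names and timings are hypothetical,
# not taken from Cinebench or either vendor's tools.

def pixel_count(width: int, height: int) -> int:
    """Total pixels the card must shade and display per frame."""
    return width * height

low = pixel_count(1024, 768)      # 786,432 pixels
high = pixel_count(1280, 1024)    # 1,310,720 pixels
print(f"1,280 x 1,024 pushes {high / low:.2f}x the pixels of 1,024 x 768")
# -> roughly 1.67x, which is why holding interactivity steady across the two
#    resolutions points to strong throughput rather than raw clock speed.

def speedup(software_seconds: float, opengl_seconds: float) -> float:
    """Cinebench-style factor, assumed here to be how many times faster the
    card renders the test scene than the CPU-only software display."""
    return software_seconds / opengl_seconds

# Hypothetical timings chosen to reproduce the factors quoted in the review.
print(f"Quadro4: {speedup(100.0, 100.0 / 1.68):.2f}x the software display")
print(f"Fire GL: {speedup(100.0, 100.0 / 1.43):.2f}x the software display")
```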