- Unbelievable power
- Great idle power draw
- Buggy video transcoding
The reviews have been out for long enough, and the cards are, for the most part, available on shelves. I’ll save you the trouble of going any further to answer your burning question: yes, the ATI Radeon HD 5870 is the fastest single-chip video card on the market and perilously close to being the fastest card, period. It is roughly as fast as its dual-chip predecessor, the ATI Radeon HD 4870 X2, and typically fairly close to Nvidia’s dual-chip GeForce GTX 295. And now that I have one in my hands, I can tell you it is fast. Very fast.
Prior to this card I’d been using a Radeon HD 4870 1GB and a GeForce GTX 280, neither one exactly a slouch. The 4870 and GTX 280 offered surprisingly similar gaming experiences, and those experiences were excellent, leaving very little to be desired. But personally I’m not an Nvidia man (I’d been having issues with their drivers and my second monitor), so without a working Radeon I went ahead and pre-ordered an XFX Radeon HD 5870 off of Amazon, using a gift card and a certain family member’s discount.
If you want to be utterly nontechnical, you can consider the Radeon HD 5870 to be roughly twice as powerful as the Radeon HD 4870. It has twice the everything of the 4870 – shaders, texture units, render back-ends – except for the memory and memory bus, which are only marginally faster. A version with 2GB of GDDR5 instead of 1GB has been announced and is coming down the pipe, but I expect the difference will only really matter to users crazy enough to try Eyefinity, Crossfire, and Crossfired Eyefinity.
You’ll have to believe me when I say the 5870 is far more than enough card for a 1920×1200 monitor, or even a massive 30-inch 2560×1600 display. I have yet to find a game that won’t run at 1920×1200 with everything maxed out and anti-aliasing enabled, and that includes Crysis and Crysis: Warhead. Under DirectX 10, they can handle maxed-out settings with 2xAA; under the more performance-friendly DirectX 9, bump that up to 4xAA. Eager to find software that could push the card harder, I downloaded the demo of the oft-maligned and horribly optimized Cryostasis, and even that was perilously close to playable.
As you can see from the following benchmark scores, very little actually stresses the card, and I suspect I’m actually CPU-limited by my Core 2 Quad overclocked to 3.7GHz. These games were run at the highest settings available in their respective options menus.
| Game | Minimum framerate | Maximum framerate | Average framerate |
| --- | --- | --- | --- |
| Bioshock: 1680×1050, maximum settings | 102 fps | 415 fps | 194.64 fps |
| Bioshock: 1920×1200, maximum settings | 90 fps | 379 fps | 182.32 fps |
| Left 4 Dead: 1680×1050, maximum settings | 69 fps | 266 fps | 144.43 fps |
| Left 4 Dead: 1920×1200, maximum settings | 66 fps | 284 fps | 137.85 fps |
| Call of Duty: WaW: 1680×1050, maximum settings | 54 fps | 94 fps | 86.52 fps |
| Call of Duty: WaW: 1920×1200, maximum settings | 57 fps | 94 fps | 87.61 fps |
I can say with certainty I’ve never been so blown away by the performance of a video card. The 5870 has such an excess of power that it’s almost difficult to justify the price. When 4870s and GTX 260s are running around offering plenty of performance, how do you make a case for twice as much?
ATI/AMD shrewdly answers this question with two new features.
In response to OEMs asking for more display output capabilities from the graphics chips, AMD/ATI introduced a feature called “Eyefinity” in the Radeon HD 5000 series. J.R. talked briefly about Eyefinity in his write-up on the 5800s here, and having put it into practice I can tell you there’s a lot to like and a lot that will keep it a niche feature.
First, here’s what’s to like:
That’s Crysis: Warhead, running at 3840×1200 across my two 1920×1200 monitors. While I found it perfectly playable at all Enthusiast settings, the numbers didn’t necessarily back me up. Dropping the shaders setting down to Gamer and leaving the rest at Enthusiast, and running the game under DirectX 9, the Radeon HD 5870 averaged 30fps almost on the dot. Let’s be clear here: last generation’s single-GPU cards had trouble running it at all Enthusiast at 1920×1200, but the 5870 can more or less double the resolution and offer extremely smooth performance.
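To put “more or less double the resolution” in concrete terms, here’s a quick back-of-the-envelope calculation (the resolutions are the ones from this review; the script itself is just illustrative):

```python
# Pixels rendered per frame: one 1920x1200 monitor vs. a
# two-monitor 3840x1200 Eyefinity group (two 1920x1200 panels
# side by side).
single = 1920 * 1200      # 2,304,000 pixels
eyefinity = 3840 * 1200   # 4,608,000 pixels

print(single)
print(eyefinity)
print(eyefinity / single)  # exactly 2.0x the pixels to render
```

So a two-panel Eyefinity setup asks the GPU for exactly twice the pixels of a single 1920×1200 screen, which is why hitting 30fps here is such a strong showing.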
Outside of gaming, I honestly think Eyefinity could prove useful for anyone who does media work on their machines, or even just uses two or more monitors. The 5000 series supports up to three displays per card, an appreciated jump from two on, well, nearly every other non-professional card out there. Since the technology presents all of your monitors to Windows as a single screen, your taskbar can now stretch all the way across the bottom of your monitors. Windows 7’s docking features can make things even easier if you’re using software that stretches out over multiple screens.
That said, you don’t have to use Eyefinity if you’re not inclined, and there are a couple good reasons why.
The most obvious one applies if you have just two screens: most games focus the action on the center of the screen, which is exactly where your bezel will be. At best that causes a little vertigo; at worst it makes games much harder to play. The other major hang-up is how games handle the odd aspect ratios that stem from using Eyefinity. Enemy Territory: Quake Wars ran beautifully with some choice commands entered into the console, and Crysis also ran fantastically. But less flexible game engines can produce very skewed images. Mass Effect, running on Unreal Engine 3, probably would’ve been playable in-game, but the awkward resolution resulted in menus being scaled to the point where you couldn’t see some of the options. It wants 16:9 or 4:3; 32:10 or wider isn’t on the menu.
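Those odd shapes are easy to see with a little arithmetic. A quick sketch (the resolutions are the ones discussed above; the helper function is just for illustration):

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce a resolution to its simplest width:height ratio."""
    d = gcd(width, height)
    return (width // d, height // d)

print(aspect_ratio(1920, 1200))  # (8, 5)  -> 16:10, one monitor
print(aspect_ratio(3840, 1200))  # (16, 5) -> 32:10, two-wide Eyefinity
print(aspect_ratio(1920, 1080))  # (16, 9) -> what Mass Effect expects
```

A two-wide Eyefinity group is literally twice as wide as anything a game shipping with 16:9 and 4:3 presets is built to expect, so menus and HUD elements scaled for those ratios can end up stretched off-screen.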
If you run a game at a single monitor’s resolution in Eyefinity, the second monitor simply acts as a duplicate, and unfortunately this little quirk is what keeps me from running it all the time. When I game I tend to leave e-mail, Firefox, and/or chat windows open on the second screen, and none of those would show up with Eyefinity enabled.