NVIDIA's Geforce GTX 285 is based on
the same GT200 GPU architecture found inside the Geforce GTX 280, but
with a few noticeable tweaks, higher clock speeds and a die shrink that has prompted NVIDIA
to rename the chip the GT200b.
Given the size of the naked NVIDIA GT200b GPU, it's not so
surprising to hear that it contains 240 stream processors and 1.4 billion transistors. Thanks to a
55nm die shrink, NVIDIA is able to pack these transistors into a smaller package. While
the die of the original GT200 measured 576mm2, the GT200b has been
brought down to approximately 425mm2. Still, NVIDIA is in desperate need
of a 40nm or 32nm die shrink.
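As a rough sanity check on those die sizes, ideal area scaling goes with the square of the linear feature-size ratio. The snippet below is only an approximation; real-world shrinks never scale perfectly, which is why the actual GT200b comes in a little larger than the ideal figure.

```python
# Idealized die-shrink scaling check (real shrinks don't scale perfectly).
old_node_nm = 65.0   # GT200 process node
new_node_nm = 55.0   # GT200b process node
old_die_mm2 = 576.0  # GT200 die area

# Ideal die area scales with the square of the linear feature-size ratio.
scale = (new_node_nm / old_node_nm) ** 2
ideal_new_die_mm2 = old_die_mm2 * scale
print(f"Ideal 55nm die area: {ideal_new_die_mm2:.0f}mm2")  # vs. ~425mm2 actual
```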
The default clock speed of the Geforce GTX 285 is 648MHz; on the Gigabyte
GV-N285OC-2GI it has been factory overclocked to 660MHz. The GPU communicates over a 512-bit
wide memory interface, and it offers the highest core, shader
and memory clock speeds of any desktop NVIDIA graphics card.
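That 512-bit memory interface translates into a lot of raw bandwidth. A quick back-of-the-envelope calculation is sketched below; the 1242MHz GDDR3 clock (2484MT/s effective) is the reference GTX 285 memory speed, assumed here for illustration.

```python
# Peak memory bandwidth from bus width and effective data rate.
# 1242MHz GDDR3 (2484MT/s effective) is the reference GTX 285 memory
# clock, assumed here for illustration.
bus_width_bits = 512
effective_rate_mts = 2484                   # double data rate: 2 x 1242MHz
bytes_per_transfer = bus_width_bits // 8    # 64 bytes moved per transfer
bandwidth_gbs = bytes_per_transfer * effective_rate_mts / 1000
print(f"Peak memory bandwidth: {bandwidth_gbs:.1f}GB/s")  # ~159GB/s
```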
The GT200b graphics processing unit is very much a match for ATI's R700 GPU. While
the ATI videocards have more on-board stream processors and faster
memory, the NVIDIA videocards compensate with faster shader clocks
and wider memory interfaces. The end result is a pair of GPUs that have
very different architectures, but still manage to achieve very similar performance.
|NVIDIA GT200 vs ATI R700 Videocards|
|ATI R700|NVIDIA GT200|
|Radeon HD 4870X2|Geforce GTX 295|
|Radeon HD 4890|Geforce GTX 285|
|Radeon HD 4870|Geforce GTX 280|
|Radeon HD 4850|Geforce GTX 275|
| |Geforce GTX 260|
Of course, the Gigabyte
GV-N285OC-2GI videocard that PCSTATS is looking at today has been
substantially modified from the original Geforce GTX 285 reference design.
Gigabyte has doubled the amount of available memory to a whopping
2048MB of GDDR3. There are a few benefits to having such a huge amount
of on-board memory, most of which come into play when running games at very high resolutions
(think 2560 x 1600) or across multiple monitors. The
extra memory is also useful when it comes to turning on eye
candy like 16xAA. While most users won't have any conventional use for this
much video memory, this is far from a conventional videocard, and will likely
be used in very exotic system configurations.
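To put that 2048MB into perspective, a rough estimate of the framebuffer cost at 2560 x 1600 with 16xAA is sketched below. This is a simplified worst-case illustration: it assumes one stored color and depth sample per subsample and ignores compression, textures and other buffers that real drivers manage differently.

```python
# Rough framebuffer cost at 2560x1600 with multisample AA.
# Simplified worst case: ignores compression, textures and other
# buffers -- for illustration only.
width, height = 2560, 1600
bytes_per_pixel = 4   # 32-bit color
aa_samples = 16       # 16xAA: one stored sample per subsample (assumed)
color_mb = width * height * bytes_per_pixel * aa_samples / (1024 ** 2)
depth_mb = width * height * 4 * aa_samples / (1024 ** 2)  # 32-bit depth/stencil
print(f"Color + depth at 16xAA: {color_mb + depth_mb:.0f}MB")  # 500MB
```

Even under these simplified assumptions, antialiased buffers alone can chew through a quarter of the card's 2048MB, which is where the extra memory earns its keep.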