The 8500 chip introduces three new features to the Radeon line of cards. As well as enhanced versions of
ATI's HyperZ, Charisma Engine and Pixel Tapestry (more on these later), we now have Smartshader,
Truform and Smoothvision. Hmmm... nice names. What do they do?
Smartshader is ATI's
programmable vertex and pixel shader technology. Remember the fuss about the Geforce 3's
programmable nfiniteFX engine? This is essentially the same thing. Graphics
shaders are programmable operations that can be performed internally by the GPU
upon graphical data. A vertex shader enables programmable transform and lighting
effects to be carried out on vertex data as it passes through the geometry
processing stage, taking the burden of calculating complicated geometrical data
off the CPU, at least to some extent.
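To make that concrete, here is a toy software model of the work a vertex shader program does for each vertex: transform the position and compute a per-vertex lighting term. This is plain Python for illustration only; the function names and vertex format are my own assumptions, not ATI's or DirectX's API, and real shaders run on the GPU in a dedicated instruction set.

```python
# Toy model of a programmable vertex shader's per-vertex work:
# transform the position, then compute a simple diffuse (Lambert)
# lighting term. Purely illustrative, not real shader code.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def dot3(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def vertex_shader(position, normal, mvp, light_dir):
    """Run once per vertex: transform to clip space, light the vertex."""
    clip_pos = mat_vec(mvp, position + [1.0])    # homogeneous transform
    diffuse = max(0.0, dot3(normal, light_dir))  # Lambert lighting term
    return clip_pos, diffuse

# Identity transform, light shining straight down the z axis:
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
pos, lit = vertex_shader([1.0, 2.0, 3.0], [0.0, 0.0, 1.0],
                         identity, [0.0, 0.0, 1.0])
print(pos, lit)  # [1.0, 2.0, 3.0, 1.0] 1.0
```

The point of programmability is that a developer can replace the fixed transform-and-light step above with arbitrary per-vertex math of their own.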
This enables developers to program their own geometrical
transformation and lighting effects, which should lead (in the next generation
of games) to much more detailed and interactive landscapes, characters and
particle effects, driven more by physics than by performance constraints. Pixel shaders perform
their operations on individual pixels as they are drawn during the rendering
stage, in the case of the 8500's pixel shader allowing up to 6 texture
operations (containing colour and lighting information, or instructions to
reference values from previously rendered pixels) to be performed in one
rendering 'pass.' This allows previously time consuming rendering operations
such as tracking multiple light sources to be rendered much more quickly, thus
gifting developers with a much wider range of pixel-level colour and lighting
operations that can be performed without overly compromising frame rates.
As with the vertex shader, the pixel shader is
programmable, allowing customizable lighting and texture effects. ATI is
focusing on the 8500's full support of the soon-to-be-released DirectX 8.1 API,
the major new feature of which is version 1.4 of the pixel shader instructions
that are used to program the shaders in both the Nvidia and ATI GPUs. Version
1.4 enables 6 texture operations on a single pixel per pass, as opposed to 4 for
version 1.3 (which the GF3 currently supports) and previous versions.
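A toy calculation makes the difference concrete. Each extra rendering pass means re-rasterizing the geometry, so the pass count, not the raw op count, dominates the cost of a multitexture effect. The op limits below come from the spec versions just described; everything else is illustrative:

```python
import math

def passes_needed(num_texture_ops, ops_per_pass):
    """Rendering passes required for an effect that applies the given
    number of texture operations to each pixel, when the hardware can
    do at most ops_per_pass of them in a single pass."""
    return math.ceil(num_texture_ops / ops_per_pass)

# A six-texture effect (say: base map, detail map, two light maps,
# gloss map, environment map):
print(passes_needed(6, 6))  # pixel shader 1.4 limit -> 1 pass
print(passes_needed(6, 4))  # pixel shader 1.3 limit -> 2 passes
```

Whether that halved pass count shows up as real frame rate, as Nvidia disputes below, depends on how many games actually use six-texture effects.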
In theory, this will allow the 8500 to render
more complex texture effects within a single rendering pass than the GF3 is
capable of. I say "in theory" because, first of all, nothing besides ATI tech
demos currently uses the 1.4 instruction set. Secondly,
Nvidia has disputed ATI's claim to superiority, noting that the Geforce 3's
nfiniteFX engine is capable of the same amount of detail. Nvidia
also suggests that the two extra texture operations per pass supported by
the Smartshader will not add up to any real-world speed increase. Finally, in
order for the 8500's 1.4 compatibility to be used, developers will have to code
for it specifically. That means extra work, and you can be sure that most will
program for the 1.3 pixel shader spec, which the GF3 supports.
If Nvidia decides to integrate full DirectX 8.1 support into its chips at a
later date, things will become clearer, but at the moment it looks as though
software developers will be looking at two standards, and my bet is that they
will write for the most popular, which would be the 1.3 version.
Truform is, to me, the most interesting of the new 'features,'
and it's also the one which is likely to have the biggest impact on the way
today's games look. The basic idea behind Truform is to
exploit the capabilities of the GPU to draw large numbers of triangles very
quickly (more triangles equaling more on-screen detail) without the
corresponding loss of performance that transferring the mass of extra vertex
data needed to generate those triangles over the AGP bus would incur.
In simple terms, because I don't have the math skills to
explain it fully, this is the way it's achieved: The CPU passes data for a set
of three vertices forming a simple triangle, plus the value of the normal of
each vertex, to the GPU. The Truform process then generates 10 control points
from this data, one for each vertex, two evenly spaced along each edge of the
triangle and projected onto the plane defined by the normal of the nearest
vertex, and one in the center of the triangle. This gives the GPU a set of
points which it can use to describe a 3 dimensional curved surface over the
original flat triangle. This surface is then tessellated, or broken up into many
small triangles, at which point it is passed on to the transform and lighting stage.
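Truform is generally understood to implement the published "curved PN triangles" scheme, and the ten points described above map onto it directly. The sketch below builds them from three vertices and their normals; it is a simplified Python illustration of that scheme, not ATI's actual hardware math, which may differ in detail:

```python
# Sketch of the ten-control-point construction described above,
# following the curved PN-triangle scheme Truform is understood to
# use. Vectors are plain 3-element lists; purely illustrative.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def edge_point(p_near, p_far, n_near):
    """Control point one third of the way from p_near to p_far,
    projected onto the tangent plane defined by n_near."""
    w = dot(sub(p_far, p_near), n_near)  # offset from the plane
    return scale(sub(add(scale(p_near, 2.0), p_far),
                     scale(n_near, w)), 1.0 / 3.0)

def pn_control_points(p1, p2, p3, n1, n2, n3):
    """Return the 10 control points of a cubic PN triangle:
    3 corners, 6 edge points (two per edge), 1 center."""
    corners = [p1, p2, p3]
    edges = [edge_point(p1, p2, n1), edge_point(p2, p1, n2),
             edge_point(p2, p3, n2), edge_point(p3, p2, n3),
             edge_point(p3, p1, n3), edge_point(p1, p3, n1)]
    # Center: average of the edge points, pushed away from the
    # flat-triangle centroid by half the difference.
    e = scale([sum(c[i] for c in edges) for i in range(3)], 1.0 / 6.0)
    v = scale([sum(c[i] for c in corners) for i in range(3)], 1.0 / 3.0)
    center = add(e, scale(sub(e, v), 0.5))
    return corners + edges + [center]

# A triangle in the z=0 plane whose normals all point straight up
# stays flat: every control point keeps z = 0.
pts = pn_control_points([0, 0, 0], [1, 0, 0], [0, 1, 0],
                        [0, 0, 1], [0, 0, 1], [0, 0, 1])
print(len(pts))  # 10
```

When the vertex normals disagree, the edge points lift out of the triangle's plane, which is exactly what gives the tessellated surface its curve.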
The effect of this is that for each set of three vertices
passed over the AGP bus to the card, the 8500 chip will draw not one large
triangle, but many, many smaller ones. This should have several advantages. The
additional triangles will effectively 'round out' 3D models, the added detail
translating to less blockiness: gun barrels that actually look curved close up,
for example. More triangles making up a model also means more
realistic geometrical lighting will be possible, getting a little closer to
per-pixel lighting realism without the performance penalty. Because the
Truform operation is done internally, it should incur much less of a
performance hit than trying to pass all the necessary vertex information over
the AGP bus would.
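Some back-of-the-envelope arithmetic shows why doing the tessellation on-chip matters. The vertex size and tessellation level below are assumptions made for the sake of the example, not ATI figures:

```python
# Illustrative AGP-traffic comparison: tessellate on the CPU and ship
# every vertex, versus ship one flat triangle and let the GPU
# tessellate. Vertex size and level are assumed, not ATI's numbers.

VERTEX_BYTES = 32  # e.g. position + normal + texture coordinates

def verts_in_tessellated_tri(level):
    """A triangle tessellated to the given level is split into
    level**2 small triangles sharing (level+1)*(level+2)//2 vertices."""
    return (level + 1) * (level + 2) // 2

level = 4
host_side = verts_in_tessellated_tri(level) * VERTEX_BYTES  # CPU tessellates
truform = 3 * VERTEX_BYTES                                  # GPU tessellates
print(host_side, truform)  # 480 96
```

Even at this modest tessellation level the bus traffic is five times higher when the CPU does the work, and the gap widens quadratically as the level rises.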
According to the ATI Truform white paper, the surfaces generated by the vertex
data will always have the same curve relative to the shape of the triangle, but
the number of new triangles generated by tessellation will be variable. The biggest
plus to this technology, as far as I'm concerned, is that support for it can be
patched into current 3D games. Apparently Counter-Strike will get Truform support
sometime soon, and if this is true, and it does add to the visual quality of the
game as ATI is claiming, then this should mean good things for the 8500 card
when it comes out.
If you want a more mathematically based explanation
of how Truform works, go here.