Originally, the GeForce4 MX core was not well received by the hardware community, and the biggest qualm rested on its name. Since the GeForce4 MX is based on GeForce2 MX technology, it doesn't really deserve the 4 in its name. In fact, it isn't even worthy of a GeForce3 MX title, since the GF4 MX core doesn't have any DirectX 8 features. Since the GeForce2 MX is usually on par with a GeForce DDR in terms of performance, it's all too easy to assume that the GeForce4 MX is close to the GeForce3 line of cards in performance, which is very far from the truth. That said, it's a shame the GeForce4 MX got off to such a bad start, since it's quite powerful in today's games.
One of the biggest upgrades the GeForce4 MX received was Lightspeed Memory Architecture II (LMA II), which is very similar to what's found on the GeForce4 Ti line of cards. The MX440 also has the same multisampling AA engine and nView, and nVidia has upgraded the visibility subsystem as well.
LMA II on the GeForce4 MX is set up a little differently from the LMA II on the GeForce4 Ti. Instead of four independent 32-bit memory controllers (4 x 32 = 128-bit DDR), the GeForce4 MX has two independent 64-bit memory controllers (2 x 64 = 128-bit DDR).
From this, it's obvious that the GeForce4 MX's LMA II isn't as efficient as the one found on the GeForce4 Ti, but it's still a very nice upgrade over the memory controller found in the GeForce2 line of cards. If you're interested in how LMA works, please read this.
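To see why narrower independent controllers are more efficient, consider that each controller moves data in units of its own bus width, so a small request gets rounded up to a multiple of that width. The little sketch below is a simplified illustration of that rounding effect, not NVIDIA's actual crossbar logic:

```python
# Simplified illustration (not NVIDIA's actual implementation):
# a request is rounded up to a whole number of bus-width transfers,
# so narrower controllers waste fewer bits on small accesses.

def wasted_bits(request_bits, controller_width_bits):
    """Bits transferred but not needed when a request is rounded
    up to the controller's bus width."""
    full_transfers = -(-request_bits // controller_width_bits)  # ceiling division
    return full_transfers * controller_width_bits - request_bits

# A hypothetical 96-bit request (say, three 32-bit colour values):
print(wasted_bits(96, 32))   # GeForce4 Ti-style 32-bit controller: 0 bits wasted
print(wasted_bits(96, 64))   # GeForce4 MX-style 64-bit controller: 32 bits wasted
print(wasted_bits(96, 128))  # one monolithic 128-bit bus: 32 bits wasted
```

Averaged over many small accesses, the finer-grained GeForce4 Ti arrangement throws away less bandwidth, which is exactly the efficiency gap described above.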
Accuview AA is only available in Direct3D games at the moment, and it tackles one of the major problems with AA enabled: blurry images. Accuview AA solves this by moving the reference subpixel sample to inside the actual pixel, instead of on the edge as in Quincunx AA. This gives the videocard a more accurate read on colour, and with AA enabled it should produce a sharper image with less colour error. For more info on how Accuview AA works, please look at this.
With 8X AGP, the bandwidth between the computer and videocard has been doubled from 1.06 GB/s (4x AGP) to 2.1 GB/s. This allows various data (shaders, 3D models, textures, etc.) to travel to the GPU and back potentially twice as fast. Hopefully 8X AGP will do more for performance than the jump from 2x AGP to 4x AGP did.
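Those bandwidth figures fall straight out of the AGP bus parameters: a ~66.67 MHz base clock, a 32-bit (4-byte) bus, and a multiplier's worth of transfers per clock. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope AGP peak bandwidth, assuming the standard
# ~66.67 MHz AGP base clock and 32-bit bus; the multiplier is the
# number of transfers per base clock (2x, 4x, 8x).
AGP_BASE_MHZ = 200 / 3      # ~66.67 MHz base clock
BUS_BYTES = 32 // 8         # 32-bit bus -> 4 bytes per transfer

def agp_bandwidth_gbs(multiplier):
    """Peak transfer rate in GB/s for a given AGP multiplier."""
    return AGP_BASE_MHZ * multiplier * BUS_BYTES / 1000  # MB/s -> GB/s

print(round(agp_bandwidth_gbs(4), 2))  # 4x AGP -> 1.07 GB/s
print(round(agp_bandwidth_gbs(8), 2))  # 8x AGP -> 2.13 GB/s
```

These are peak numbers, of course; real transfers rarely sustain the full rate, which is one reason AGP speed bumps have historically delivered modest real-world gains.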
The stock MX440/MX440-8X is already quite powerful, so I was quite interested to see just how high this little card would go when overclocked. After all, more performance = happy :)
The heatsink is attached to the MX440-8X core with thermal paste, and the ramsinks with frag tape; not the best method, but better than nothing. Anyway, since the core is factory overclocked already, I began to up the core speed slowly; 310 MHz, 320 MHz, and 330 MHz saw no problems whatsoever. At 350 MHz, 360 MHz and 370 MHz the card still ran smoothly. It finally hit the limit at a 387 MHz core speed. Anything higher and 3DMark would lock up!
In the past, we've had some pretty good luck overclocking Samsung memory, and we were really hoping for a bit more this time. With the memory clocked at 550 MHz, right around its rated maximum (1000 / 3.6 ns x 2 for DDR = 555 MHz), we started raising the memory frequency. To keep things short, the maximum speed we were able to hit with the memory was 613 MHz. Anything higher and the artifact tester (found here) would start picking up errors.
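That rated-speed figure comes from the memory chips' ns rating: a 3.6 ns cycle time implies a clock of 1000 / 3.6 ≈ 278 MHz, and DDR transfers data twice per clock. A tiny sanity check of the conversion:

```python
# Converting a DRAM chip's ns rating to its nominal effective DDR speed:
# clock (MHz) = 1000 / cycle time (ns), doubled for DDR's two transfers
# per clock.

def ddr_rating_mhz(cycle_time_ns):
    """Effective DDR data rate implied by a chip's ns rating."""
    return 1000 / cycle_time_ns * 2

print(round(ddr_rating_mhz(3.6)))  # 3.6 ns chips -> ~556 MHz effective
print(round(ddr_rating_mhz(4.0)))  # 4.0 ns chips -> 500 MHz effective
```

By this measure our 613 MHz result is a healthy overclock of roughly 10% beyond what the chips are rated for.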
As you'll see in the benchmarks on the following pages (check out UT2003 especially), with a 387 MHz core and 613 MHz memory, the Prolink PixelView GeForce4 MX440-8X is a pretty formidable opponent, even for newer videocards!