nVidia was left behind a bit when ATi released the Radeon 9700 Pro about a year and a half ago. Not only did ATi manage to deliver their card long before nVidia, but when nVidia finally released their GeForce FX 5800 Ultra it also turned out that they did not have much to answer with. The successors GeForce FX 5900 and 5950 Ultra had more success, but most people’s opinion was that ATi’s products were simply superior when it came to both image quality and performance.
In other words, ATi has been on top in 3D graphics for a year and a half, for the first time in the company’s history. Today nVidia puts a stop to that trend.
The card that we have had the honour of inspecting for the last two weeks is the GeForce 6800 Ultra, and it is built on nVidia’s completely new NV40 architecture. nVidia has now left the perhaps-not-entirely-successful FX series behind and in a way returned to the old ways, since this card is part of the GeForce 6 series. Already when we first sat through nVidia’s presentation in San Jose two weeks ago we started to realize what a monster this card would become. With 16 pipelines (twice as many as the Radeon 9800 XT and four times as many as the GeForce FX 5950 Ultra), GDDR3 memory at ~1.1+ GHz, full support for the new DirectX 9.0c (Shader Model 3.0) and a long list of other impressive attributes, it is very hard not to be impressed.
As we have seen before, it is not enough to just look at the specifications. Luckily for us, nVidia was kind enough to send us home with an NV40 in our hands, and lately we have put the card through a series of tests and inspected the technology behind it. We begin by inspecting the theoretical facts.
As we mentioned in the introduction, nVidia has abandoned the GeForce FX name and returned to the numerical designation. The new series of graphics cards will be part of the GeForce 6 series. nVidia’s plan is to build all their graphics cards, from the bottom of the range to the top, on the same technology. All features should be approximately 100% identical and the only thing differentiating the cards will be performance. Keeping in mind that the flagship, the GeForce 6800 Ultra, has 16 pipelines, high clock frequencies and a 256-bit memory bus, it’s not hard to see that the architecture can be pared down considerably. At the bottom of the scale we might see a GeForce 6200 with only four pipelines, relatively low clock frequencies and a 128-bit memory interface (pure speculation on my part; nVidia has not announced any GeForce 6200). We welcome the approach with open arms, since it means that game developers can start to work with effects demanding DirectX 9 a lot faster than was possible before.
The first two cards we know about in the GeForce 6 series are the GeForce 6800 and the GeForce 6800 Ultra. Judging from previous experience, the two cards will be identical apart from the clock frequencies. The amount of memory may possibly differ too; we doubt, for example, that the Ultra version will be available with only 128 MB, while the non-Ultra version most likely will be available with both 128 and 256 MB of memory.
| Card/Circuit | GeForce FX 5950 Ultra / NV38 | Radeon 9800 XT / R360 | GeForce 6800 Ultra / NV40 |
| --- | --- | --- | --- |
| Manufacturing process | 0.13-micron | 0.15-micron | 0.13-micron |
| Transistors | ~130 mil. | ~115 mil. | ~222 mil. |
| GPU speed | 475 MHz | 412 MHz | 400 MHz |
| Pixel pipelines / pixel fillrate | 4 (8) / 1900 MP/s | 8 / 3296 MP/s | 16 (32) / 6400 MP/s |
| TMUs (per pipeline) / texel fillrate | 2 / 3800 MT/s | 1 / 3296 MT/s | 1 / 6400 MT/s |
| Memory speed | 950 MHz | 730 MHz | 1100 MHz |
| Memory type / bandwidth | 256-bit DDR / 30.4 GB/s | 256-bit DDR / 23.4 GB/s | 256-bit GDDR3 / 35.2 GB/s |
| Pixel Shader | 2.0a | 2.0 | 3.0 |
| Vertex Shader | 2.0a (2 units) | 2.0 (3 units) | 3.0 (6 units) |
| FSAA | 2x RGMS, 4x OGMS + MS/SS comb. | 6x RGMS + gamma correction | 4x RGMS + MS/SS comb. |
| Centroid sampling | No | Yes | Yes |
| Aniso | 8x | 16x | 16x |
| Standard outputs | 1x VGA, 1x DVI and 1x “S-Video” (HDTV) | 1x VGA, 1x DVI and 1x “S-Video” (HDTV) | 2x DVI and 1x “S-Video” (HDTV) |
| Recommended PSU | 350 W | 300 W | 480 W |
| PCI slots occupied | 2 | 1 | 1 |
A quick look at the table above directly reveals that the GeForce 6800, at least on paper, makes nVidia’s previous cards and ATi’s current cards look like weak budget products. The transistor count is nothing short of exceptional; 222 million transistors make NV40 the world’s largest chip manufactured in any quantity worth mentioning. The chip itself is built on 0.13-micron technology and is produced by IBM. That so many transistors are necessary is not really strange, bearing in mind how many pipelines the card has. Another contributing factor is the new video-handling unit included in the chip; more about that later on.
The GPU runs at 400 MHz, which makes it possible to reach an impressive fillrate for both pixels and texels. At this rate it won’t take long until we have to count terapixels per second instead of megapixels. In practice this means that we’ll be able to run even higher resolutions than before and that high-resolution textures will work without performance losses to speak of. As a matter of fact, a high texture fillrate also improves the performance of anisotropic filtering, which we are looking forward to.
For those of you who aren’t familiar with this and wonder why the core is 75 MHz slower than the 5950 Ultra’s, it’s important to point out that the speed of the core is meaningless as long as we don’t know how many pipelines the card is equipped with. The 5950 has four (4×475) while the 6800 Ultra has sixteen (16×400).
The card is equipped with GDDR3 at a clock frequency of 1.1 GHz. GDDR3 is a further development of GDDR2, which could be found on some cards from ATi’s and nVidia’s previous graphics card generations. With GDDR3 the power consumption, and to an even higher degree the heat generation, has been drastically cut down. The card’s clock frequency is maybe a bit conservative, since 1.6 GHz GDDR3 is being produced at the moment (though in what quantities we have no idea yet). At first sight this makes the architecture look a bit unbalanced: the card’s fillrate has been quadrupled compared to the FX 5950 Ultra, while the bandwidth has been increased by close to 16 percent. nVidia has hinted that some of their board partners will launch cards with higher clock frequencies. We haven’t heard anything about how much higher yet, but if we were allowed to guess we’d say about 1.2-1.4 GHz.
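For reference, the fillrate and bandwidth figures in the table fall out of simple arithmetic: fillrate is pipelines times core clock, and bandwidth is the bus width in bytes times the effective memory clock. A minimal sketch with the table’s own numbers:

```cpp
#include <cstdio>

int main() {
    // Pixel fillrate = pixel pipelines x core clock (MHz -> MP/s)
    std::printf("5950 Ultra: %d MP/s\n",  4 * 475);   // 1900 MP/s
    std::printf("6800 Ultra: %d MP/s\n", 16 * 400);   // 6400 MP/s

    // Bandwidth = (bus width / 8) bytes x effective memory clock.
    // 256 bits = 32 bytes per transfer.
    std::printf("5950 Ultra: %.1f GB/s\n", 32 *  950 / 1000.0); // 30.4 GB/s
    std::printf("6800 Ultra: %.1f GB/s\n", 32 * 1100 / 1000.0); // 35.2 GB/s
    return 0;
}
```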
As we mentioned in the introduction, the GeForce 6 is built on a completely new architecture; revolution rather than evolution, that is. The deep and sensitive pipelines of the GeForce FX series have been replaced with highly parallel ones. With sixteen pipelines and two pixel shader units per pipeline there is a raw power here that will probably be hard to match.
We’ll go through the remaining specifications later in the article. Now let’s start by having a look at the card itself.
We can imagine that you are a bit curious about what an NV40-based card looks like. To start with, we can tell you that the image in the introduction was of an early prototype, and as you will soon see the design has changed a bit. nVidia often has something fun going on when they are about to deliver new graphics cards. This time the card came in a small metal briefcase:
I have to admit that I felt quite special when I opened the little briefcase and realised that I, probably the only Swede, was sitting with an NV40 in my hands (OK, not “sitting”; dancing around, calling my friends and singing “I have an NV40! I have an NV40!”).
From the top of the card one can already see that it has a pretty massive cooling solution. Almost the whole circuit board is covered by the giant cooler. What is interesting to note is that the card, despite the large cooler, does not weigh much. If we turn it around there is less of interest to look at; all the memory chips are evidently located on the top side, 8 x 32 MB to be exact.
If we take a closer look at the cooling solution we can see that it consists of a lot of small fins instead of one massive piece of metal. The fan pushes the air through the thin metal and thereby cools the card effectively. Under the fan itself and the exhaust we find a metal plate with built-in heatpipes. The solution is still quite discreet compared to what NV3x required, in my opinion.
The noise is another story. When I first turned on the computer after installing the card I was dissatisfied, to say the least. The noise level is clearly in the same class as the “vacuum cleaner” FX 5800 Ultra. But after installing the drivers, the fan spins down to “normal” speed as soon as you get into Windows. In fact, we never experienced the fan spinning up other than when booting the computer. The temperature is “low”, around 45 degrees Celsius at idle and 55 degrees at full load. The fan probably spins up when the card gets too hot, but during the week we had the card it never happened.
In other words, the noise level is OK.
As we mentioned, the card is not identical to the one in the picture in the introduction. The later revision that we have tested has a cooling solution which occupies an extra PCI slot. Not fully optimal, but on the other hand not terrible either. It does put a stop to any plans of installing the card in some sort of Small Form Factor case though. Another thing that stops SFF enthusiasts is shown below…
To the left we have the card’s outputs. Here we find two DVI ports, in other words perfect for those of you with two TFT monitors (or a TFT monitor and a projector with DVI input, for example). If you want an ordinary analogue signal you can, of course, use an ordinary DVI-to-VGA adapter. At the bottom left we have an S-Video connector for TV-out, and on future models also Video-In, HDTV component out, etc. Unfortunately we could not test the card’s TV-out quality, since nVidia did not send us a cable and none of the cables we had worked.
To the right we see number two on the list of things which prevent one from using the card in a barebone system. The card requires two clean 12 V Molex connectors on separate cables. Furthermore, they should come from a power supply of 480 watts or more. We have tried with one at 430 W and it worked fine, so the quality of the PSU also plays a vital role. One thing is clear though: the card requires a lot of power.
Before we move on with some tests it is time for the theoretical specifications again. We begin with Shader Model 3.0.
Pixel and Vertex Shader 3.0 have existed in DirectX 9.0 since the start, although no hardware has had any support for the specifications. DirectX 9.0c, which will be released shortly, will for the first time fully expose the new cards (and new functions in the old cards). For nVidia this means that their support for Shader Model 3.0 will become visible. We therefore take a look at what Shader Model 3.0 means in theory and what we think about it in practice. nVidia continues to use the CineFX name despite having left the FX series behind, and with NV40 it is time to introduce CineFX 3.0. Much has happened since last time.
Pixel Shader 3.0 in theory
To make things a bit clearer we begin with Pixel Shader 3.0. In the table below you find the minimum requirements for Pixel Shader 2.0 and 3.0. We have chosen to exclude Pixel Shader 1.x and 2.0x (2.0a and 2.0b) simply to keep the list readable.
| Pixel Shader version | 2.0 | 3.0 |
| --- | --- | --- |
| No. of instructions (/texture instructions) | 64 / 32 | 512 |
| Executed instructions | 64+32 | 65535 |
| Full floating point precision | FP24 | FP32 |
| Dependent read limitations | 4 | None |
| Texture instruction limitations | 32 | None |
| Temporary registers | 12 | 32 |
| Constant registers | 32 | 224 |
| Instruction predication | No | Yes |
| Dynamic flow control | No | Yes |
| Dynamic branching | No | Yes |
| Backface register | No | Yes |
| Arbitrary swizzle | No | Yes |
| Centroid sampling | No | Yes |
| FP16 textures and blending | No | Yes |
| Multiple render targets | No | Yes |
The list above is meaningless if we don’t know which cards support what. To make things clear:

- Radeon 9500 – 9800 XT: 2.0
- GeForce FX: 2.0(a)
- GeForce 6: 3.0
Here we chose to go by Microsoft’s specifications instead of listing the implementations in different graphics solutions; we find it easier to present the information this way, but at the same time it limits us somewhat, since no card follows the listed specifications exactly. For example, the Radeon cards support some things which exceed the Pixel Shader 2.0 specification, while GeForce FX on the other hand lacks support for some PS 2.0 functions. We return to some of these subjects towards the end of this theoretical reading.
The list covers the minimum requirements. Some of the functions which are requirements in Shader Model 3.0, for example Multiple Render Targets, are possible in Shader Model 2.0 but not required, and are therefore listed as a “No” in the table above. Unfortunately Microsoft has no detailed information about Pixel Shader 2.0a and 2.0b, and the information that exists about 2.0x is so incomplete that we have chosen not to include it in the list. Right now Microsoft lists a version, up to DirectX 9.0b, called Pixel Shader 2.0x, and the specification for this version is very diffuse.
As you can see, Pixel Shader 3.0 is a fairly large step forward from Pixel Shader 2.0. Pixel Shader 2.0a and 2.0b are something between 2.0 and 3.0, leaning towards 2.0.
One of the major changes is Dynamic Branching. The technique means that Pixel Shader programs can now contain IF statements, i.e. the same method used in almost all other programming. In short, you no longer have to run a program line by line; instead you can insert IFs to steer the program. It is not all rosy though, since branching can in some cases cause performance, or rather efficiency, problems.
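To illustrate the idea, here is a minimal CPU-side sketch in plain C++ (not real shader code; the lighting terms and the pow-based “expensive path” are our own illustrative choices):

```cpp
#include <cmath>
#include <cstdio>

struct Pixel { float nDotL; }; // light intensity reaching this pixel

// PS 2.0 style: no real branch. Both sides of the "IF" are evaluated
// and the result is picked with a select, so the expensive path is
// always paid for, even by pixels that end up black.
float shade_ps20(const Pixel& p) {
    float lit   = p.nDotL + std::pow(p.nDotL, 32.0f); // always computed
    float unlit = 0.0f;
    return (p.nDotL > 0.0f) ? lit : unlit;            // select, not skip
}

// PS 3.0 style: dynamic branching lets the program skip the expensive
// path entirely for pixels facing away from the light.
float shade_ps30(const Pixel& p) {
    if (p.nDotL <= 0.0f)
        return 0.0f;                                  // early out
    return p.nDotL + std::pow(p.nDotL, 32.0f);
}

int main() {
    Pixel bright{0.8f}, dark{-0.2f};
    std::printf("%.3f %.3f\n", shade_ps20(bright), shade_ps30(bright));
    std::printf("%.3f %.3f\n", shade_ps20(dark),  shade_ps30(dark));
    return 0;
}
```

The efficiency caveat mentioned above comes from pixels being shaded in groups: if neighbouring pixels take different sides of the IF, the hardware may still end up executing both sides for the whole group.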
The other obvious difference is simply the ability to run longer Pixel Shader programs, i.e. programs with more instructions. The earlier PS 2.0 limit of 96 instructions is broken; now you can have 512. Compared to DirectX 8.1 (where Pixel Shader 1.1 had a maximum of 8 instructions, 1.2-1.3 a maximum of 12, and Pixel Shader 1.4 a maximum of 14 instructions) this is a pretty astonishing number. Game developers reported pretty quickly that they hit the wall with the 96-instruction limit, so pushing it forward is a very welcome change. We will of course probably not encounter games with Pixel Shader programs counting their instructions in the thousands in the near future, but it is nice to see that the limit is, from a practical point of view, essentially gone.
Another novelty is that the minimum requirement for what is considered full floating point precision has changed from FP24 (96 bits) to FP32 (128 bits). The FX series could also brag about FP32 support, but the major difference with the GeForce 6 is that nVidia has now worked very hard to make FP32 the new performance standard. The bad performance associated with FP32 in the FX series is only a memory in the GeForce 6 series.
Those of you who have kept up with Half-Life 2’s development have probably heard about Centroid Sampling by now. In short, it is a solution for a certain type of artifacts (graphical errors) which often occur when using MSAA. The problem shows up as “discoloured” pixels along FSAA-treated edges.
Second from the bottom of the list we have the ability to use FP16 textures and FP16 frame buffer blending. The most interesting possibilities here lie in the HDR category. High Dynamic Range rendering, while already possible on PS 2.0 hardware, is taken to a new level thanks to this support. In other words we are mainly talking about lighting: while (s)RGB is enough (so far) to represent the final colours on our monitors, the internal calculations require more precision.
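A minimal sketch of why unclamped FP16 blending matters, assuming a simple Reinhard-style tone-mapping step at the end (the operator and the light values are our own illustrative choices):

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    const float light[3] = {0.9f, 0.8f, 0.7f}; // three light passes

    // 8-bit style blending clamps after every pass: detail above 1.0 is lost.
    float ldr = 0.0f;
    for (float l : light)
        ldr = std::min(ldr + l, 1.0f);

    // FP16-style blending keeps the unclamped sum and only compresses it
    // to the displayable 0..1 range at the very end.
    float hdr = 0.0f;
    for (float l : light)
        hdr += l;                           // 2.4, kept intact
    float displayed = hdr / (1.0f + hdr);   // simple tone mapping

    std::printf("clamped: %.2f  tone-mapped: %.2f\n", ldr, displayed);
    return 0;
}
```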
Last but not least we find MRT: Multiple Render Targets. With multiple render targets, per-pixel information can be stored in up to four buffers and later be used together to create advanced effects.
Pixel Shader 3.0 in practice
So what does all this mean for the consumer? Will the graphics be ten times better, will all games be a lot faster? As usual with these kinds of specifications, games must first make use of the technology. It is mainly about performance though: the great majority of effects possible in 3.0 are also possible in 2.0. The FP32 requirement in 3.0 is one of the few things, besides Centroid Sampling, that we can directly tie to better graphics (as opposed to better performance). Another thing definitely worth mentioning among the quality-increasing functions is the possibility of using textures and frame buffer blending with partial (FP16) floating point precision. On the other hand it works both ways: the more performance, the more cool effects you can add.
With Pixel Shader 3.0 we have once again taken a step closer to offline-rendering quality.
Despite all this talk about pixels we cannot forget our dear vertices; in other words, it’s time to take a look at Vertex Shader 3.0.
Pixel Shaders are what you usually hear about when shaders are discussed, perhaps because the effects you can achieve with them are so visible. However, Vertex Shaders are a very important part of today’s programmable graphics cards. So far it has mostly been about performance, but there are also things directly related to the possible effects.
Vertex Shader 3.0 in theory
Just as in the table for Pixel Shaders, we have chosen to exclude version 1.x here.
| Vertex Shader version | 2.0 | 3.0 |
| --- | --- | --- |
| No. of instructions | 256 | 512 |
| Executed instructions | 65535 | 65535 |
| Temporary registers | 12 | 32 |
| Instruction predication | No | Yes |
| Dynamic flow control | No | Yes |
| Dynamic branching | No | Yes |
| Vertex textures | No | Yes |
| Vertex stream frequency | No | Yes |
On the Vertex Shader front, too, the number of instructions has increased. We haven’t found the maximum for VS 3.0, since Microsoft is a bit vague about it; the minimum is 512 though, which is double that of the earlier version.
Vertex Shader 3.0 has also gained support for Dynamic Branching, Dynamic Flow Control and Predication, just like Pixel Shader 3.0. For details, see the Pixel Shader 3.0 section so I don’t have to repeat myself.
The major change is called Vertex Textures. Vertex Shader programs can now read textures directly, which perhaps first brings to mind what you might call dynamic Displacement Mapping. With Displacement Mapping you can, for example, take a greyscale picture where different shades represent different heights. You then tessellate an area and deform it according to this height map. The method resembles bump mapping, but the big difference is that here you actually deform the surfaces rather than just simulating the deformation with 2D effects.
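A CPU-side sketch of the idea (the data layout and the nearest-texel lookup are our simplifications; on NV40 the lookup would be a vertex texture fetch inside the vertex shader):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vertex { float px, py, pz, nx, ny, nz, u, v; };

// Each vertex samples a greyscale height map ("vertex texture fetch")
// and is pushed out along its own normal: a real deformation of the
// surface, not a 2D lighting trick like bump mapping.
void displace(std::vector<Vertex>& mesh,
              const std::vector<float>& heightMap, // greyscale, values 0..1
              std::size_t texW, std::size_t texH, float scale) {
    for (Vertex& vert : mesh) {
        // nearest-texel lookup from (u, v) into the height map
        std::size_t x = std::min(texW - 1, static_cast<std::size_t>(vert.u * texW));
        std::size_t y = std::min(texH - 1, static_cast<std::size_t>(vert.v * texH));
        float h = heightMap[y * texW + x];

        // move the vertex along its normal according to the sampled height
        vert.px += vert.nx * h * scale;
        vert.py += vert.ny * h * scale;
        vert.pz += vert.nz * h * scale;
    }
}
```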
An efficiency-enhancing function in VS 3.0 is Vertex Stream Frequency: with this technology one can decide how often a vertex stream should advance relative to the vertices being processed. This way you can streamline the work by avoiding unnecessary fetches. Besides streamlining, it can also be used for other things: with a technique that nVidia calls Geometry Instancing, you can let many copies of an object be drawn in a single batch.
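A sketch of the fetch pattern this enables (a CPU model of the concept, not a real Direct3D call sequence):

```cpp
#include <cstddef>

struct Matrix4  { float m[16]; };
struct MeshVert { float px, py, pz; };

// One stream advances once per vertex, the other only once per instance,
// so a whole crowd of identical objects goes out as a single batch
// instead of one draw call per object.
void drawInstanced(const MeshVert* verts, std::size_t vertsPerMesh,
                   const Matrix4* instanceWorld, std::size_t instances) {
    for (std::size_t i = 0; i < instances; ++i) {
        const Matrix4& world = instanceWorld[i];   // fetched once per instance
        for (std::size_t v = 0; v < vertsPerMesh; ++v) {
            const MeshVert& vtx = verts[v];        // fetched once per vertex
            // ... here the vertex shader would transform vtx by 'world'
            (void)world; (void)vtx;
        }
    }
}
```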
Vertex Shader 3.0 in practice
When it comes to Vertex Shader 3.0, the new specification will primarily mean better performance, but thanks to Vertex Textures new possibilities also open up. Hopefully Displacement Mapping will get its breakthrough with Vertex Shader 3.0.
On the whole, Shader Model 3.0 is a pretty big step forward, both in performance and in graphical quality. We know one company that will claim the opposite though. The real proof simply lies in how many games end up supporting the technology. We have spoken to a couple of game developers and know of about five developers, and at least three or four specific titles, that will support Shader Model 3.0, so the future actually looks bright on this point.
Turn the page to read about more programmable parts of the GeForce 6.
Most graphics chip manufacturers today have shown a desire to make cards for more than just 3D. The GeForce 6800 Ultra continues that trend by offering a programmable video processor in the chip.
The new sub-processor, so to speak, can hardware-accelerate encoding and decoding of a long list of popular formats: MPEG, Windows Media 9, DivX and other MPEG-4 formats are on the list. Besides acceleration we also get other nice things like real-time post-processing (i.e. effects applied to a video while it’s being played), advanced de-interlacing, high-quality up- and down-scaling and more. What gets offloaded can be adjusted depending on what the computer is otherwise busy with. Some things will of course still be handled by the CPU, and the data is still stored in ordinary RAM. The performance gains can be as large as 60%, even up to 90% if we are to believe nVidia. However, most people have enough computing power to play back whatever they want; the major benefits show mainly when multitasking.
Besides performance increases, we obviously have the quality aspect. With adaptive per-pixel de-interlacing, de-blocking and other post-processing effects, there is potential to make video and film look better than they do today.
Competing products have similar solutions, but they go through the card’s ordinary pixel pipeline; in nVidia’s case we’re talking about a completely separate unit that handles video. What also sets the solution apart is that it is programmable, so new effects and support for new formats can be added later.
The elegant part of this solution is that nVidia’s drivers simply hook into Microsoft’s DirectShow, the part of DirectX which handles video. That way applications do not even have to be aware of the graphics card’s existence to be able to use these functions.
We would need a bit more specific documentation from nVidia before we can evaluate the functionality and make proper performance measurements, but it looks very promising. What we can already evaluate today is FSAA and Aniso; turn the page for more information.
Two features that have been everyday fare for a few years now are FSAA and Aniso. These two functions make it possible to drastically increase image quality by reducing jagged (“pixely”) edges and increasing the detail sharpness of textures. Together they work miracles on the final image. Not everything is straightforward though; there are many ways of implementing these two functions. On this page we go through the two techniques and show what they can do in theory with synthetic tests.
We begin by explaining Rotated Grid Anti-Aliasing. The technique is hardly new, since it first saw the light of day in the Voodoo 5. To illustrate the differences between Ordered and Rotated grids we can look at the two images below:
Take a look at how many lines are cut through by the two different solutions (more lines = better sub-pixel coverage). Rotated Grid Anti-Aliasing looks better than Ordered Grid in almost every imaginable scenario if we consider the whole image. With 4x Ordered Grid the NV40 takes 4 samples per pixel in a 2×2 sub-pixel pattern, but with Rotated Grid the four samples are spread over a 4×4 sub-pixel grid.
This is not automatically better though; the sample positions must be well chosen.
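To see why the rotation helps, consider a small sketch with illustrative 4x offsets (the exact hardware positions differ and are shown in the images below). The rotated set has four distinct x coordinates while the ordered grid has only two, so a near-vertical edge sweeping across the pixel produces more distinct coverage levels:

```cpp
#include <cstdio>

struct Offset { float x, y; };

// 4x ordered grid: only two distinct x (and y) coordinates.
const Offset ordered[4] = {{-0.25f, -0.25f}, {0.25f, -0.25f},
                           {-0.25f,  0.25f}, {0.25f,  0.25f}};
// 4x rotated grid: four distinct x (and y) coordinates.
const Offset rotated[4] = {{-0.125f, -0.375f}, {0.375f, -0.125f},
                           {-0.375f,  0.125f}, {0.125f,  0.375f}};

// How many samples does a vertical edge at x = e cover?
int coverage(const Offset* s, float e) {
    int n = 0;
    for (int i = 0; i < 4; ++i)
        if (s[i].x < e) ++n;
    return n;
}

int main() {
    // Sweep the edge across the pixel: the rotated grid steps through
    // all five coverage levels 0..4, the ordered grid only 0, 2 and 4.
    for (float e = -0.5f; e <= 0.5f; e += 0.125f)
        std::printf("edge %+.3f: ordered %d/4, rotated %d/4\n",
                    e, coverage(ordered, e), coverage(rotated, e));
    return 0;
}
```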
Take a look at the images below to see how the 4x setting has evolved since the last generation:
Below are a couple of comparison pictures of ATi’s and nVidia’s sample patterns:
As you can see on the left, ATi’s and nVidia’s sample patterns for 2x MSAA are very similar (just rotated in different directions). When we turn it up to 4x the picture changes somewhat: nVidia has rotated all of their samples by the same amount, while ATi, as shown, uses more varied positions. At nVidia’s 8x and ATi’s 6x the situation is completely new, since nVidia brings SSAA into play. Their sample pattern here is far from ideal, considering that two samples are taken from more or less identical positions. On ATi’s side we see that their FSAA method hardly qualifies as Rotated Grid, since the samples are spread in a more “chaotic” way; the layout (if we disregard the lack of the extra texture samples that nVidia’s mode provides) is clearly more desirable.
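The super sampling component is also why the 8x mode is so much more expensive than pure multi sampling, as the benchmarks later in the article will show. A rough cost model (the 2x shading factor for the hybrid is our assumption, purely for illustration):

```cpp
#include <cstdio>

// MSAA multiplies coverage/depth samples but shades each pixel once;
// the SS part of a hybrid mode multiplies the shading work itself.
int main() {
    const long pixels = 1600L * 1200L;

    std::printf("4x MSAA:  %ld coverage samples, %ld shader runs\n",
                pixels * 4, pixels);
    std::printf("8x MS/SS: %ld coverage samples, %ld shader runs\n",
                pixels * 8, pixels * 2);
    return 0;
}
```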
As we have mentioned earlier, one thing is “missing” in nVidia’s FSAA implementation compared to the closest competitor, ATi. The function we are talking about is gamma correction, and as usual the principle is easier to explain with images; please examine the two images below:
As you can see above, the gamma-corrected transition from black to white is much more nuanced than the non-gamma-corrected one. The dark tones in the left image are more or less identical, which means that, seen along the edge of a polygon, they would give a more jagged appearance than the corrected image to the right. nVidia has chosen to skip this, in our opinion, valuable function. It is worth mentioning that nVidia does support gamma correction in other places in the GPU, just not for FSAA samples.
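What gamma correction does to an edge blend can be shown with a few lines of arithmetic, assuming a display gamma of 2.2 (a typical value for monitors of the time):

```cpp
#include <cmath>
#include <cstdio>

// A pixel on a black/white polygon edge, covered 50/50 by each colour.
int main() {
    const float gamma = 2.2f;           // assumed display gamma
    const float black = 0.0f, white = 1.0f;

    // Naive resolve: average the stored gamma-space values.
    float naive = 0.5f * (black + white);                       // 0.50

    // Gamma-corrected resolve: average in linear light, convert back,
    // so the blend is correct in terms of emitted light.
    float linear    = 0.5f * (std::pow(black, gamma) + std::pow(white, gamma));
    float corrected = std::pow(linear, 1.0f / gamma);           // ~0.73

    std::printf("naive %.2f, gamma corrected %.2f\n", naive, corrected);
    return 0;
}
```

On a gamma-2.2 display, the naive 0.50 emits far less than half the light of white, so the edge pixel looks too dark; that is exactly the jagged appearance described above.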
Finally we also have Centroid Sampling, which we covered on the previous page. A problem with today’s multi sampling (in the majority of games) is that samples are sometimes taken from the wrong texture in a texture atlas (a texture atlas is a large texture containing several smaller textures). Because of this, some samples become “discoloured”, often visible as far too bright pixels randomly placed along a polygon’s edge. With Centroid Sampling you avoid this by never taking texture samples outside the surface of the polygon being rendered.
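Conceptually, the hardware adjusts the texture sample position roughly like this sketch (the names and the plain averaging are our own illustration):

```cpp
struct Vec2 { float x, y; };

// Instead of always sampling the texture at the pixel center (which may
// lie outside a thin or edge polygon and hit a neighbouring texture in
// the atlas), sample at the centroid of the covered sub-samples.
Vec2 centroidSamplePos(const Vec2 sub[], const bool covered[], int n,
                       Vec2 pixelCenter) {
    Vec2 sum = {0.0f, 0.0f};
    int hits = 0;
    for (int i = 0; i < n; ++i)
        if (covered[i]) { sum.x += sub[i].x; sum.y += sub[i].y; ++hits; }
    if (hits == 0 || hits == n)
        return pixelCenter;                  // fully in or out: center is fine
    return {sum.x / hits, sum.y / hits};     // edge pixel: stay inside
}
```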
Just inspecting sample patterns and the theory behind gamma correction makes no one happy, so we have tested the cards with Tommti-Systems’ FSAA tester:
As we can see, the cards produce quite similar image quality when we activate 2x FSAA, but if we look closely it is ATi who takes first place. With 4x FSAA the trend continues; ATi is ahead, but the margins are far from large. Even when we activate 8x on the NV40 and 6x on the R360, nVidia is again beaten on pure antialiasing. On the other hand, the 8x mode has elements of super sampling, which means texture quality also increases; more about that later.
In short, nVidia has made progress, but they still have a bit to go before they catch ATi.
We thought it might be interesting to see how nVidia’s new 4x rotated grid looks, and therefore we compared 4x FSAA on a 5950 Ultra against our 6800 Ultra:
The quality has indeed increased. However, the change is a bit less dramatic than we expected.
To summarize, the AA quality has improved, but not enough compared to ATi. What’s missing is most likely gamma correction and better balanced sample patterns. The lack of MSAA above 4x also makes itself felt. I might seem finicky, but honestly I expected more. The FSAA quality on the GeForce cards is, with the exception of the new rotated grid at 4x, in principle unchanged since the GeForce3 days.
Now let us have a look at the card’s anisotropic filtering.
When it comes to anisotropic filtering, nVidia has made two advances, really just one if you look at it from a hardware point of view. First, there is now support for more aniso settings: 6x, 10x, 12x and, above all, 16x. ATi has offered 16x for several years now, and on those cards the difference in quality between 8x and 16x is hardly noticeable.
The new intermediate settings are not available in the card’s control panels (yet?). But a game developer can, if the drivers support it, set the new levels (6x, 10x and 12x) if they want. For meticulous performance balancing it is of course nice to have several choices. (If nVidia doesn’t expose these settings in their control panels, the chance is very good that developers of so-called “tweakers”, such as RivaTuner, will do so pretty soon.)
The other difference really has nothing to do with the hardware: nVidia has added a new quality setting in their control panels called High Quality. With this setting, all so-called “adaptive” texture filtering is turned off and “pure” filtering is offered instead, without any performance optimizations. Strangely enough, we haven’t noticed any difference between Quality and High Quality. There is a new option though, Trilinear Optimizations, which gives better quality if it is deactivated.
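The sharp mip-map borders discussed below are what such an optimization produces. A sketch of the principle, with an illustrative blend-band width of our own choosing (full trilinear blends smoothly between two mip levels; the optimized mode blends only in a narrow band around the transition and is effectively bilinear elsewhere):

```cpp
// Weight given to the next mip level as a function of the fractional
// LOD f in [0,1] between two adjacent mip levels.
float trilinearWeight(float f) {
    return f;                      // smooth blend across the whole range
}

// "Brilinear"-style optimization: bilinear (weight 0 or 1) most of the
// way, with a short trilinear blend only near the mip transition.
float optimizedWeight(float f) {
    const float band = 0.25f;      // width of the blend zone (illustrative)
    float lo = 0.5f - band * 0.5f, hi = 0.5f + band * 0.5f;
    if (f <= lo) return 0.0f;      // pure lower mip level
    if (f >= hi) return 1.0f;      // pure upper mip level
    return (f - lo) / (hi - lo);   // narrow blend zone
}
```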
Below you find a series of images where we compare the different cards’ AF with each other. For those of you not familiar with aniso, mip-map levels and such, a small explanation: what defines a good image here is that the colour levels are pushed further back into the image, and that the transitions between the different colour levels are as even and smooth as possible. We made thorough comparisons between High Quality and Quality, but since we couldn’t spot any difference (not a single pixel), we have chosen to show the results with and without nVidia’s Trilinear Optimizations (below referred to as “TriOpt On” and “TriOpt Off”).
The differences when using only trilinear filtering are hardly massive. ATi’s solution stands out slightly with a somewhat higher Level Of Detail. With nVidia’s Trilinear Optimizations on, we see that the borders between the mip-map levels are quite sharp. In some games this shows up as banding along the mip-map levels, similar to what you see when running bilinear filtering. Thanks to the new setting, we can turn off the optimizations and get “ordinary” trilinear filtering though.
When we activate anisotropic filtering, the differences between nVidia’s two settings become larger. As the images clearly show, you have to turn off Trilinear Optimizations to make the image quality comparable to ATi’s. It is also interesting to see how similar nVidia’s new aniso implementation is to ATi’s. At multiples of 22.5 degrees the AF level drops drastically on nVidia’s card; on ATi’s card the same happens at 25 degrees. To say that one of them is better is impossible.
As games don’t usually consist of looking down a tunnel, we have taken a couple of pictures on “plain ground” too:
Yet again it is obvious that we have to turn off nVidia’s optimizations to get a comparable picture, at least in these theoretical tests.
At least as important is how NV40 differs from its predecessor; below we have compared NV40 (6800 Ultra) to NV38 (FX 5950 Ultra):
Note that we have taken the pictures with the two cards’ standard settings for texture filtering quality.
The first picture to the left is interesting, since the transitions between the mip-map levels look more or less untouched between the two cards. But note that NV38 has somewhat more aggressive filtering at 45-degree angles, the exact opposite of what happens when anisotropic filtering is activated in the middle image. There is not much more to say about the middle picture, really; here NV40 is the one that comes out ahead.
In the end the impression is somewhat mixed. That they, like ATi, now limit the aniso quality at certain angles of course increases performance, but at the same time it reduces quality a bit. That we now have full control over trilinear filtering is a plus, and the new 16x setting is a welcome addition even if it doesn’t make a gigantic difference.
As it looks now, nVidia’s anisotropic filtering is similar to what ATi offers. How it compares to older GeForce cards is harder to say, since nVidia has taken steps both forward and back on the quality front.
Maybe the new High Quality setting is meant to control the angle dependency? We actually don’t know, but we suspect the setting does something; so far we haven’t found out what. No matter whether you choose High Quality or Quality, it stays at Quality in nVidia’s little tweak program in the taskbar.
On the following page we take a closer look at how the different FSAA and Aniso settings perform relative to each other, and what impact they have on image quality in games.
As you will read later in the article, one of our reference cards broke before we had really started the review. For that reason, some of our more detailed tests lack results for the 5950 Ultra. Those are the tests where we evaluated the different levels of FSAA and Aniso, and we present them on this page.
We ran all tests at three different resolutions, but as you will notice, many of the results at 1280×1024, all but two to be exact, are missing since problems occurred at this resolution in UT2004. In the right column you see the performance difference in percent:
| Setting | Resolution | GeForce 6800 Ultra | Radeon 9800 XT | Difference |
| --- | --- | --- | --- | --- |
| Without AA/AF | 1024×768 | 174.5 | 188.2 | -7.3 % |
| | 1280×1024 | 170.8 | 145.4 | 17.5 % |
| | 1600×1200 | 166.1 | 105.3 | 57.7 % |
| 2x AA | 1024×768 | 172.5 | 178 | -3.1 % |
| | 1280×1024 | 169.2 | 125.2 | 35.1 % |
| | 1600×1200 | 164.4 | 89.8 | 83.1 % |
| 4x AA | 1024×768 | 172.5 | 164.7 | 4.7 % |
| | 1280×1024 | 167.5 | – | – |
| | 1600×1200 | 149.6 | 77.9 | 92 % |
| 8x/6x AA | 1024×768 | 78 | 133.7 | -41.6 % |
| | 1280×1024 | 49.5 | – | – |
| | 1600×1200 | 30.4 | 61.7 | -50.7 % |
| 2x AF | 1024×768 | 174.4 | 166.4 | 4.8 % |
| | 1280×1024 | 169.8 | – | – |
| | 1600×1200 | 157.9 | 87 | 81.5 % |
| 4x AF | 1024×768 | 171.2 | 150.6 | 13.7 % |
| | 1280×1024 | 167 | – | – |
| | 1600×1200 | 136.6 | 76.5 | 78.6 % |
| 8x AF | 1024×768 | 170.3 | 140.8 | 20.9 % |
| | 1280×1024 | 162.6 | – | – |
| | 1600×1200 | 126.6 | 73 | 73.4 % |
| 16x AF | 1024×768 | 169.9 | 139.5 | 21.2 % |
| | 1280×1024 | 160 | – | – |
| | 1600×1200 | 124.4 | 72.3 | 72.1 % |
| 2x AA / 4x AF | 1024×768 | 169.4 | 137 | 23.6 % |
| | 1280×1024 | 162.8 | – | – |
| | 1600×1200 | 128.3 | 68 | 88.7 % |
| 4x AA / 8x AF | 1024×768 | 164 | 116.6 | 40.7 % |
| | 1280×1024 | 140.7 | – | – |
| | 1600×1200 | 103.9 | 58.8 | 76.7 % |
| 8x/6x AA / 16x AF | 1024×768 | 53.9 | 101.1 | -46.7 % |
| | 1280×1024 | 35.6 | – | – |
| | 1600×1200 | 23.1 | 50.2 | -54 % |
The GeForce 6800 Ultra performs incredibly well, no doubt about that; at 1600×1200 it’s approximately 80 % faster than a Radeon 9800 XT on average. At 1024×768 there are differences too, but they are more modest. That a card can perform almost twice as fast as the former king, the 9800 XT, is nothing short of amazing. It was a long time ago we saw performance increases this big from a new graphics card; the last time I remember is Radeon 9700 Pro vs. GeForce 4 Ti 4600.
But as you can see, the GeForce card runs into trouble as soon as we enable 8x FSAA. When this super sampling + multi sampling hybrid is activated, performance nosedives to far below the R360’s level. The super sampling does have some advantages though, since it increases texture sharpness and reduces texture aliasing.
We had problems taking screenshots with FSAA enabled on the NV40 in full-screen mode, and we also had another problem with ATi’s drivers in OpenGL. When we had taken all the screenshots we realized that there was no FSAA in any of nVidia’s screenshots, and that ATi’s screenshots were all without Aniso for some other inscrutable reason.
Sadly this means that further screenshots are not possible at the moment. We intend to fix this as soon as possible.
Before today’s tests we upgraded our test system a bit: we have acquired another Western Digital Raptor (we now have two in RAID 0).
Test system

Hardware

CPU: AMD Athlon XP 3200+ (400 MHz FSB)
Mainboard: ABIT AN7 uGuru (nForce2 400 Ultra)
RAM: 768 MB DDR400 @ 2-5-2-2 timings (3x 256 MB Corsair TWINX512-3200LL DDR-SDRAM)
Graphics cards: Reviewed card: GeForce 6800 Ultra. Reference cards: GeForce FX 5950 Ultra and Radeon 9800 XT
HDD: RAID 0: 2x 37 GB Western Digital Raptor 10,000 RPM (SATA, 8 MB cache)
Sound card: Creative SoundBlaster Audigy 2 ZS Platinum Pro
PSU: Tagan TG480-U01 480 W
Ethernet: 3Com 10/100

Software

Operating system: Windows XP Professional (Service Pack 1 + updates)
Video drivers: nVidia ForceWare 60.72; ATi Catalyst 4.4
Other drivers: nVidia ForceWare UDA chipset drivers v3.13
Benchmark applications: Unreal Tournament 2003 (v2225)
We have chosen to include two reference products today: the GeForce FX 5950 Ultra and the Radeon 9800 XT. Sadly, our 5950 Ultra died during the tests; hence there are three games on the list (plus some other things) which we could only test with the GeForce 6800 Ultra and the Radeon 9800 XT.
* Since there has been a whole lot of talk about 3DMark03 and various optimizations, we’ve chosen to test 3DMark03 only with the 6800 Ultra. The only reason is to satisfy the curiosity we know is out there among our readers. We at NordicHardware do not consider these test results interesting at the moment.
Since we have been limited in time, and because our 5950 Ultra died, we have chosen to only test the card in our regular test bed consisting of a long list of games. Unfortunately we are limited to test results at 1280×1024 with 4x FSAA and 8x Aniso. In a future article we will publish more complete results.
In all tests we use the resolution 1280×1024 with 4x AA (anti-aliasing, i.e. edge smoothing) and 8x AF (anisotropic filtering, i.e. advanced texture filtering) unless otherwise mentioned. With ATi’s card we use Quality Aniso, and with nVidia’s card we use the Quality setting with Trilinear Optimizations off. We have chosen 1280×1024 (or 1280×960 where 1280×1024 isn’t an option) since it’s a reasonable setting performance-wise, but also because our readers probably have monitors that can handle this resolution.
In the tests where 1280×1024 with 4x AA/8x AF turned out to be too demanding, we first lowered the resolution to 1024×768; if that still was not enough, the resolution was lowered further. The third and last resort is turning off AA/AF.
After running the actual performance tests we spent about 30 minutes (sometimes much more) sitting down and really testing the game by playing it, to see how it behaves in real life.
Quake 3: Arena

We test the OpenGL game Quake 3 to evaluate performance in older titles. A large number of titles are built on the “Q3” engine. We use the demo four.dm_67 in the test utility Q3Bench.

Game engine: OpenGL (DX7 level)
Pixel Shaders: No
Vertex Shaders: No
The 6800 Ultra impresses with 50 % better performance than our 9800 XT. The old 5950 Ultra lands right between the two contestants with its 250 fps.
Subjective analysis: Of course you don’t need 300 fps for the game to work well. But with 300 fps on average you also have a much higher fps in the most demanding situations than a card averaging 100 fps does. That means plenty of room to raise the level of both FSAA/Aniso and the resolution.
Unreal Tournament 2003

UT2003 is a DirectX 8 game which puts graphics cards under a lot of stress with large textures, high polygon counts and more. A multitude of games are built on this engine. We use the more graphically demanding flyby test, on two different maps: Bifrost and Inferno.

Game engine: Direct3D (DX8.1)
Pixel Shaders: No (1.1 and 1.4)
Vertex Shaders: No (1.1)
The Radeon 9800 XT has been at the top for several months, but the 6800 Ultra takes over with ease. The performance increase compared to the 5950 Ultra is nothing short of astonishing!
Subjective analysis: UT2003 is a delight with the 6800 Ultra. Even at higher resolutions it works absolutely flawlessly.
WarCraft 3: Reign of Chaos

WarCraft 3 is one of this year’s best-sellers, which makes it a good test object. Even if the graphics lack extravagant technology, the game is quite demanding. The performance tests are made on the first map of the WC3 demo with FRAPS.

Game engine: Direct3D (DX8.1)
Pixel Shaders: No
Vertex Shaders: No
There are no major differences in WarCraft 3; given its CPU limitations we didn’t expect much here. NV40 takes the lead, but hardly by anything remarkable.
Subjective analysis: In my opinion you don’t need more than about 30 fps in an RTS. Most cards on today’s market manage that without a problem, and the 6800 Ultra is of course no exception.
Mafia: The City of Lost Heaven

Mafia is built on an in-house Direct3D engine and uses large amounts of relatively low-detail objects to create great richness of detail. Similar engines can be found in, for example, the GTA series. To measure performance we ran Free Ride’s first level and used FRAPS.

Game engine: Direct3D (DX8.1)
Pixel Shaders: Yes (1.1)
Vertex Shaders: Yes (1.1)
As usual, nVidia’s cards have a hard time keeping up with the competition in Mafia. We don’t really know why, but it’s obvious that more than raw GPU power makes the difference here. It’s worth mentioning that NV40 is a big improvement over NV38 though.
Subjective analysis: The performance difference between the 9800 and the 6800 is larger on paper than in practice. The game flows fine on NV40 despite its second place.
Comanche 4

Comanche 4 is built on an in-house Direct3D engine which uses Pixel Shaders and high-resolution textures. The game is one of the few that really “needs” a graphics card with 256 MB. We test with the benchmark tool in the downloadable demo.

Game engine: Direct3D (DX8.1)
Pixel Shaders: Yes (1.1)
Vertex Shaders: Yes (1.1)
Unfortunately, we are limited by the CPU again in Comanche 4. Not so limited, however, that the 6800 can’t take first place with a relatively good margin in this test.
Subjective analysis: Of course Comanche 4 flows well on all three tested cards. However, there is more headroom to raise the resolution on the 6800 Ultra.
Counter-Strike

Counter-Strike doesn’t need much of a presentation. However, word has it that the new 1.6 version is more demanding than previous versions, which is why we chose to test it. The test consists of a demo on the map de_aztec with 18 players in total.

Game engine: OpenGL (DX6 class)
Pixel Shaders: No
Vertex Shaders: No
The CPU sets the limit, again. For the CS fanatic it won’t make much difference which card you choose.
Subjective analysis: CS flows on all our cards. Not much more worth mentioning about this.
Battlefield 1942: Secret Weapons of WWII

Developed in Sweden, Battlefield 1942 is still a very popular online multiplayer game, and therefore we feel it’s important to test it. Once again we use FRAPS and test the downloadable demo.

Game engine: Direct3D (DX8.1)
Pixel Shaders: No
Vertex Shaders: No
Battlefield 1942 has so far been one of the Radeon series’ triumphs. NV40 ends that trend and delivers a remarkable improvement compared to what the GeForce FX 5950 Ultra had to offer. There is, however, no major difference between the 9800 XT and the 6800 Ultra.
Subjective analysis: Compared to the 5950 Ultra it’s just wonderful to play Battlefield on our NV40. If we instead compare with the Radeon 9800 XT, the difference is minimal.
Tomb Raider: Angel of Darkness

Tomb Raider is the first commercial game to use DirectX 9.0 fully for rendering graphics, and that makes it an interesting test object. The test is performed using the latest patch and its built-in tools. The level Prague3a was chosen for the test.

Game engine: Direct3D (DX9)
Pixel Shaders: Yes (1.1, 1.4 and 2.0)
Vertex Shaders: Yes (1.1 and 2.0)
Time for another gigantic increase in performance compared to the previous generation. ATi’s reigning champion still holds the fort though, and isn’t shaken in any particular way.
Subjective analysis: Tomb Raider works really well with the 6800 Ultra, almost as well with the 9800 XT, but pretty poorly with the 5950 Ultra. Despite the small differences, NV40 is actually a very noticeable step up from what we experienced playing on the 9800.
Star Wars Jedi Knight: Jedi Academy

Jedi Academy is the follow-up to the popular Jedi Knight II. It’s based on the Q3 engine, but has high-resolution textures and more light effects. We have tested a demo recorded by ourselves on the map Traspir, where we face 7 opponents.

Game engine: OpenGL (DX8.x class?)
Pixel Shaders: Yes? (1.x?)
Vertex Shaders: Yes? (1.x?)
It doesn’t take a genius to figure out that we are limited by our CPU again. The card manages a small performance increase though.
Subjective analysis: In Jedi Academy we don’t experience any differences in performance. To make the differences noticeable you probably have to raise the resolution at least one step.
Halo: Combat Evolved

Halo is what you could call the first really worthwhile DirectX 9 game, which of course makes it very interesting. We benchmark by adding the command line switch -timedemo, which measures performance in the game’s cut scenes and in turn gives a reasonable overview of how the card performs in the game.

Game engine: Direct3D (DX9)
Pixel Shaders: Yes (1.1, 1.4 and 2.0)
Vertex Shaders: Yes (1.1)
The review’s most impressive result is found in our second-to-last game test. NV40 shows its raw power and mercilessly crushes our two reference cards; NV40 is more than twice as fast as NV38 here. Hopefully this is a trend we will see more of in future DirectX 9 tests.
Subjective analysis: The difference in performance here is remarkable. The game flows much better with NV40 than with any other card available today. More of this, please!
Max Payne 2: The Fall of Max Payne

The sequel to the incredibly popular Max Payne, developed by Finnish Remedy. We test performance by benchmarking a so-called cut scene with FRAPS. The game uses the spectacular Havok physics engine and Pixel Shaders, among other things.

Game engine: Direct3D (DX9)
Pixel Shaders: Yes (1.1 and 1.4)
Vertex Shaders: Yes (1.1)
Last among the tests is Max Payne 2, where the 6800 takes the lead again. Yet again we see a remarkable increase over the 5950, but a less dramatic one over the 9800.
Subjective analysis: The 6800 Ultra has the best flow, of course, but the difference to the 9800 isn’t that big at this resolution.
Concluding words about the game tests
The GeForce 6800 Ultra is, as expected, the fastest graphics card we have ever reviewed here at NordicHardware. The performance increase over the GeForce FX 5950 Ultra ranges from a few percent up to over 100 %. The differences will become even more remarkable as more DirectX 9 tests arrive.
The overall differences against the 9800 XT are less dramatic, but they are there and very noticeable in many tests. Most impressive are of course the tests we did at 1600×1200; today’s products simply don’t stand a chance against the 6800 Ultra, which is sometimes almost twice as fast!
Sadly our 5950 Ultra broke, which limited the tests we were able to do at 1280×1024. If we had been able to test at 1600×1200 as well, the 6800 Ultra’s advantage would have been even greater. However, we feel that we would be doing our readers a disservice by pushing too hard on the 1600×1200 results. According to our reader surveys, the majority of our readers have either a 17-19 inch CRT or a 17-19 inch TFT. None of these monitors are really capable of running 1600×1200; of course some of the CRTs can handle the resolution, but only at unsatisfactory refresh rates.
What we have to wait for before giving a definitive verdict are partly the new games Doom 3 and Half-Life 2, which should be available soon, and partly ATi’s upcoming card. When we have two of these things in our test lab we can make a confident judgment. As it looks now, the GeForce 6800 Ultra is the world’s fastest graphics card. We have hardly been able to find any game, regardless of settings, that doesn’t run at a good and steady fps with this card. You really can just plug and play. For me personally it means 1280×1024 with 4x FSAA and 16x Aniso in all the games I own; even new titles such as Far Cry work perfectly at these settings.
In an upcoming test we will do more performance tests at various resolutions and with more varied levels of AA/AF. We will also include tests from e.g. UT2004, Far Cry and other popular games.
For those of you who have read the entire review and still wonder how many 3DMarks the card can produce, we have something for you: 11688. With faster processors we have seen results above 13k. But until Futuremark has certified drivers for NV40, we have to take these results with a pinch of salt.
For those of you who don’t think the NV40 is fast enough, we have now reached the overclocking section.
Since we are reviewing a reference card today, which is most probably not the final product you will find in stores, you have to take our overclocking results with a pinch of salt as well.
For overclocking we use Coolbits, and to test stability and look for so-called artifacts we use the Mother Nature test in 3DMark03. RivaTuner, which we usually use, didn’t work at all; whether that is due to the drivers or the hardware is not certain yet, but the fact is that no program at all managed to read the clock frequencies correctly. As an example, 3DMark03 reported that we had a core running at 0 MHz and memory running at 70 Hz.
Product: nVidia 6800 Ultra
Regular: 400/1100 MHz
Overclocked: 446/1170 MHz
Increase: 11.5 % / 6.4 %
As you can see, the results are anything but impressive. If we turn it the other way around, it can be seen as follows:
If you overclock the core on a 5950 Ultra by 46 MHz, you increase the card’s pixel fillrate by 184 MP/s (4×46); when we overclock our 6800 Ultra by the same number of MHz, we instead get an increase of 736 MP/s (16×46). No matter what, 11.5 % is not an impressive result.
The memory is even worse and only manages an increase of a lousy 6.4 percent. Hopefully third-party manufacturers will be our saviours here.
On the positive side, the card’s temperature was hardly affected at all by our overclocking. One might also argue that you don’t “have” to overclock an NV40, but to be honest there are not many people who overclock because they “have to”.
It is time to close up shop. After 15 pages we feel it is time to summarize this article.
So what is the final verdict?
First of all, it is the performance that impresses. The card is, as expected, completely unbeatable in almost everything we tested (with one exception only). The more eye candy and the higher the resolution, the larger the differences become. Sadly this was not 100% visible in our tests, since we were limited to 1280×1024. The card’s real “sweet spot” is at 1600×1200 with 4x FSAA and 16x Aniso enabled; at those settings it is simply amazing. The performance improvement is as high as 100 %, and in some theoretical tests one can see even higher figures.
What impresses the most, if we disregard high resolutions with a lot of AA/AF, is of course the DirectX 9 performance. As we saw in Halo and Tomb Raider, performance has really gotten an incredible boost, so everyone who was scared off by the FX series’ terrible DX9 performance can now breathe again.
In short, the GeForce 6800 Ultra is the world’s fastest graphics card right now, and that with knobs on. Obviously the big question is whether ATi will be able to counter with its R420; as it looks now, it will be a tough fight.
In spite of the extremely good performance, there is one thing that nags a bit, and that is the memory speed. As we mentioned earlier, one gets a slight feeling that the card’s design is a bit unbalanced. The two traditionally most important factors are bandwidth and fillrate, and while we see an extreme increase in fillrate, the increase in bandwidth is more moderate. Luckily, nVidia has pointed out from the start that there are manufacturers who will remedy this.
As important as the performance is the image quality, and we can see that nVidia has taken another step forward here. The clearest addition is the support for 4x Rotated Grid Anti-Aliasing. ATi’s solution is still a bit better in my honest opinion, but as soon as the resolution reaches 1280×960 or more, the differences become hard to see, to say the least. A step in the right direction, but a step that could have been longer, so to speak. It is really nice to see that nVidia keeps the 8x level, which combines MSAA and SSAA, but we find it somewhat astounding that they have removed 4xS, which would have given almost equally good image quality for a fraction of the performance loss. (Hopefully tweakers like RivaTuner will be able to help us here though.)
The second point is obviously the support for 16x anisotropic filtering, and the fact that nVidia now also lets you take full control over the trilinear filtering. Genuine trilinear filtering is something that has been missing ever since the GeForce FX was first launched, and it feels really good to see it make a comeback with the GeForce 6. On the other hand, nVidia has chosen, like ATi, to implement an angle-dependent aniso solution, which means that as we take two steps forward, we also take one step back.
The new functions in Shader Model 3.0 will in the future probably give us games with nicer effects than we have seen before. In any case, the demos and examples we have seen so far look really promising.
If we move on to things with no direct link to 3D, changes have occurred here too. First and foremost is the new video engine, which we find exciting. With support for hardware-accelerated post-processing, encoding and decoding, this is a thrilling piece of technology. The problem is that we have not been able to evaluate it yet, but we are working on it.
The next non-3D feature on our list is as simple as the card’s outputs. It is really pleasant to see that nVidia has made dual DVI outputs standard on the GeForce 6. For the rest, the 2D quality is perfect, likewise the performance, even if the increase is nowhere near as dramatic as in 3D.
Another area, and now we are moving into things with no direct relation to the hardware, is the new ForceWare drivers in the 60.xx series. There are a lot of improvements that we really like. Support for application-specific settings is something we should have had years ago, so it is with joy we see it appear in nVidia’s drivers. Likewise, support for multiple monitors has become a lot better in the latest version of nView. For all AMD64 enthusiasts out there we have good news, since nVidia has a lot going on on this front. There is a lot left to optimize in the drivers, and nVidia will continue to release open betas.
That nVidia is very dedicated to 3D gaming (3D glasses, 3D displays, etc.) might not concern that many, but we know there are enthusiasts who can breathe easy now that they hear nVidia has big plans for this technology.
If we instead move over to the negative aspects, they do not directly concern the chip itself; rather, it is the physical aspects that bug us. The demands on the buyer’s PSU are crazy. The card did work flawlessly on a computer with a 430 W PSU, but then we only had the CPU, mainboard, memory and two hard drives fighting for the power. Apart from the specification itself, the demand for two “clean” cables is a thorn in the side. I use the card in my personal computer right now and I simply do not have two free cables, so in the end I had to disconnect one of my IDE units. I have since reconnected the IDE unit and let it share a cable with the GeForce card; so far it seems to work well, but it does not feel completely satisfying considering how strict nVidia is about its demands.
The other two aspects have to do with the cooling. First, it is a little irritating that we cannot get away from the dual-slot cooling, and then we have the noise level. But to our great delight it is really not common for the card’s fan to spin up to its fast level. In fact, we are generally astonished at how cool the card keeps itself even with the fan running at its quiet level.
On the whole, these are no giant flaws. For those of you who want to build a monster computer with the best performance available, there is no question that it is worth the extra trouble.
The GeForce 6800 Ultra is an incredibly good card. The performance is extreme, the image quality has improved since the GeForce FX series, the support for Shader Model 3.0 impresses, and the new video unit is just one of the many dots over the i. If you want the top graphics card on the market today, it is nVidia that supplies it. The question is just how long that will remain true. As you read this review, I am on my way to find out…
Anyway, after a minor misstep with the FX series, nVidia is really back, and that with emphasis on really.
You can comment on the article here.