Original Link: https://www.anandtech.com/show/418

3Dlabs Oxygen GVX1 PCI

by Anand Lal Shimpi on December 1, 1999 3:26 AM EST


Right now, the 3D graphics market is in a bit of a mess. For years, there was a clear distinction between workstation level graphics cards and those that you would find in your own personal system. The distinction was made not only in performance but in price as well.

The idea of bringing workstation level graphics performance down to the desktop PC has been around for years, but recently, with the advancement of 3D graphics hardware technology, it has finally become a reality. It is no longer necessary to spend tens of thousands of dollars in order to achieve that high level of performance on a $3000 PC. But this new technology wasn't the end of it.

The gaming industry has been pushing for higher fill rates, greater polygon throughput, and better image quality in order to make the overall gaming experience more pleasant. In the process, hardware manufacturers have been pushed to cope with 6 month product cycles and left with the task of taking the high end workstation level graphics performance and packaging it in a chip that can be sold on a card for less than $300.

The power of a high end graphics workstation selling for under $300 is a scary thought to manufacturers that have based large portions of their profits on expensive graphics accelerators catering to the needs of the professional community. However, this sort of intense competition is present throughout the industry, even within the high end community.



Enter the GVX1

3Dlabs is one such contributor to this competition, as they have always striven to bring the power and performance of the most powerful graphics accelerators down to the desktop workstation level without the high price tag. Their Oxygen line of video cards is intended to cater to this very market, especially the GVX1, which was released earlier this year.

The Oxygen GVX1 offered a sub-$1000 graphics solution that provided some serious competition to alternatives priced in the $1000+ range, but one of the true benefits of its architecture remained hidden until very recently. As we're all familiar with, the current AGP implementation on all x86 chipsets that support it allows for only a single AGP slot. While this isn't a problem for most gamers and other enthusiasts, professionals that want to run their 3D applications across multiple monitors without sacrificing performance are left in an interesting situation.

From the point of view of the manufacturers, they have a few options in this scenario. One is to include multiple VGA outputs on their boards (à la G400 DualHead), but this method splits the performance of the single chip evenly between the multiple outputs and thus the overall performance decreases. A second option is to include multiple chips on a single board. While this delivers the same performance to each monitor as having a single card for each monitor would, it is a binding purchase since there is no upgrade path, meaning that, if somewhere down the line you decide you need to add a third monitor to your setup, your current dual output board won't cut it. The third and final option is to provide support for PCI versions of your card and, through drivers, make sure that the performance remains constant between all displays.

This final option was the route taken by 3Dlabs as they introduced the long awaited PCI version of their Oxygen GVX1, which supports a number of flexible multi-head output configurations of up to 8 boards. How does this relate to the issue of increasingly powerful consumer level cards that we discussed before? With the advent of NVIDIA's GeForce 256, the professional graphics industry has been shaken by a sub-$300 card capable of delivering floating point performance of approximately 50 billion operations per second, in contrast to the 3 billion floating point operations per second the 3Dlabs Oxygen GVX1 is capable of. However, what we're about to find out is that numbers don't mean everything in the professional graphics market; in spite of the GeForce's arrival, the GVX1 offers itself as a cost effective solution that delivers the performance and quality we would expect from any card priced in the $1000 range.



Two parts to this Equation

The Oxygen GVX1 can be considered to be a step above the Oxygen VX1 (which closely resembles the Permedia 3 Create! that we reviewed a while back), with a major difference being that the board itself features not only the GLINT R3 Rasterization Processor but also an on-board GLINT Gamma G1 Geometry Processor.

The base of the GVX1 is the GLINT R3 rasterization processor, the same rasterization unit used on the Oxygen VX1. The GLINT R3 is fully OpenGL 1.2 compliant in silicon and thus offers full compatibility with newer OpenGL 1.2 applications as well as older 1.1 applications. Because the GVX1 is a professional level graphics accelerator, it isn't surprising to see support for 2048 x 2048 textures and 32-bit color/Z-buffer support implemented in the GLINT R3 rasterizer.

The R3's 230Mtexels/sec bilinear fill rate is indicative of its 115MHz core clock. The trilinear fill rate is half that figure, at 115Mpixels/sec. Both of these theoretical values put the fill rate performance of the GVX1 around the level of NVIDIA's TNT2 from the gaming world.

The R3 also provides the GVX1 with the 300MHz RAMDAC it requires to drive its high quality 2D and 3D graphics output, as well as the card's 128-bit memory interface.

On the other end of the 128-bit memory bus are the sixteen 2MB Samsung SDRAM chips that make up the 32MB frame buffer. The Samsung chips are rated at 8ns, making the default memory clock of the GVX1 no greater than 125MHz. This puts the amount of memory bandwidth available to the GVX1 at approximately 2GB/s, which is lower than what we are used to seeing from gaming cards.
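These theoretical figures can be sanity-checked directly from the clock speeds. A quick sketch in C; the two-texels-per-clock assumption for the R3 is ours, inferred from the quoted 230Mtexels/sec:

    /* Back-of-the-envelope check of the GVX1's theoretical numbers.
       Assumption (ours): the R3 produces two bilinear texels per clock,
       which is what 230Mtexels/sec implies at a 115MHz core clock. */
    #include <stdio.h>

    int main(void) {
        const double core_mhz = 115.0;  /* GLINT R3 core clock          */
        const double mem_mhz  = 125.0;  /* 8ns SDRAM -> 1000/8 = 125MHz */
        const double bus_bits = 128.0;  /* memory interface width       */

        double bilinear_mtexels  = core_mhz * 2.0;        /* ~230 Mtexels/s */
        double trilinear_mpixels = bilinear_mtexels / 2;  /* ~115 Mpixels/s */
        double bandwidth_gbs = mem_mhz * 1e6 * (bus_bits / 8.0) / 1e9;

        printf("bilinear fill:    %.0f Mtexels/s\n", bilinear_mtexels);
        printf("trilinear fill:   %.0f Mpixels/s\n", trilinear_mpixels);
        printf("memory bandwidth: %.1f GB/s\n", bandwidth_gbs);
        return 0;
    }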

Keep in mind that the focus of a professional graphics board is much different from the focus of a gaming board, so the available memory bandwidth carries a different weight in factoring performance. Most current games rely on taking a few relatively simple scenes, constructed of upwards of 10,000 - 20,000 polygons, and applying multiple textures to them. Because of this, a high polygon throughput is not the most important factor in accelerating today's games. Instead, fill rate is almost directly proportional to performance in today's games. While this will change in the future as polygon counts increase, for now, as 3dfx has been preaching, fill rate is king.

The professional 3D arena is almost the complete opposite. The applications used in this environment create complex structures and objects, modify them in real time, and demand the utmost attention to the quality of the picture on the screen. While a high fill rate is desired, especially when manipulating and rendering textured objects, more important to professional 3D applications is a high polygon throughput. This is where the second part of the GVX1's equation comes into play -- the Gamma G1 Geometry Processor.
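Before moving on to that second part, a rough sketch makes the contrast concrete. The workload figures below are hypothetical, chosen only to show the order of magnitude involved:

    /* Rough illustration of where each workload hits its wall first.
       All workload figures are hypothetical, chosen only for scale. */
    #include <stdio.h>

    int main(void) {
        /* A game scene: modest polygon count, heavy per-pixel work. */
        double game_polys = 15000, game_fps = 60, overdraw = 3.0;
        printf("game: %.2f Mtri/s, %.0f Mpixels/s of fill needed\n",
               game_polys * game_fps / 1e6,                /* 0.9  */
               1024 * 768 * overdraw * game_fps / 1e6);    /* ~142 */

        /* A CAD-style model: huge polygon count, simple shading. */
        double cad_polys = 750000, cad_fps = 30;
        printf("CAD:  %.1f Mtri/s of geometry throughput needed\n",
               cad_polys * cad_fps / 1e6);                 /* 22.5 */
        return 0;
    }

The game scene barely registers as geometry work but demands serious fill rate; the CAD-style model is the reverse, which is exactly the gap a dedicated geometry processor is meant to close.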



GLINT Gamma G1 Geometry Processor

If you ask a gamer about the origins of the term Hardware T&L (Transform & Lighting), they would most likely spit out the name NVIDIA because of the support for it in the GeForce 256. Professional users will provide a different answer, as they are quite aware that Hardware T&L has been around for quite some time in workstation level graphics accelerators, including the GVX1.

3Dlabs does not take NVIDIA's more cost-effective route of integrating hardware T&L support into the rasterization processor; rather, the GVX1 provides it using a chip external to the rasterizer, the GLINT Gamma G1. The G1 is capable of processing 4.75 million lit, transformed triangles per second courtesy of its 3 GFLOPS of floating point performance. In terms of polygon rate, however, the G1 pales in comparison to the only other graphics accelerator with hardware T&L we've had the liberty of testing, the GeForce.

While the 3 GFLOPS G1 can process 4.75 million triangles per second, the 50 GFLOPS GPU of the GeForce 256 is capable of up to 15 million triangles per second, with real-world rates falling around 10 million lit, transformed triangles per second. This factor is important when considering the performance differences between the GeForce and the GVX1. Also, the GeForce manages to deliver this level of raw performance at around 1/3 the price of the GVX1. So why doesn't everyone just go out and buy a GeForce for their professional applications? We're getting there, but first let's take a look at another difference between the Gamma G1 and the GeForce's GPU.
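As an aside, simple division of the quoted figures gives a feel for the per-triangle cost of transform and lighting; mostly it demonstrates that marketing GFLOPS numbers are not directly comparable between vendors:

    /* FLOPs per triangle implied by the vendors' own peak claims. */
    #include <stdio.h>

    int main(void) {
        double g1_flops = 3e9,  g1_tris = 4.75e6; /* GLINT Gamma G1   */
        double gf_flops = 50e9, gf_tris = 15e6;   /* GeForce 256 peak */

        printf("Gamma G1:    ~%.0f FLOPs per lit, transformed triangle\n",
               g1_flops / g1_tris);  /* ~632  */
        printf("GeForce 256: ~%.0f FLOPs per triangle at peak claims\n",
               gf_flops / gf_tris);  /* ~3333 */
        return 0;
    }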

While the GeForce's hardware T&L engine supports up to 8 simultaneous light sources, the G1 supports twice that: 16 simultaneous light sources. The reason for this discrepancy lies in the nature of the two solutions; support for 8 additional simultaneous light sources is more of a factor for a professional level graphics accelerator than for a gamer's card.
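In OpenGL terms, an application discovers how many hardware lights it can use by querying GL_MAX_LIGHTS; the specification only guarantees eight. A short sketch using standard OpenGL 1.x calls:

    /* Query and enable every hardware light the driver exposes. A Gamma
       G1 driver could report 16 where the GL spec only guarantees 8. */
    #include <GL/gl.h>
    #include <stdio.h>

    void enable_all_lights(void) {
        GLint max_lights = 0;
        glGetIntegerv(GL_MAX_LIGHTS, &max_lights);
        printf("driver reports %d hardware lights\n", (int)max_lights);

        glEnable(GL_LIGHTING);
        for (GLint i = 0; i < max_lights; ++i)
            glEnable(GL_LIGHT0 + i); /* GL_LIGHTi = GL_LIGHT0 + i */
    }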

The GVX1's drivers currently support full hardware transforming and lighting in both the OpenGL and DirectX 7.0 APIs. While support for the latter is not of much importance with a professional level card (as the card is aimed at more of an NT user base) it is good to know that 3Dlabs kept all ends covered with the driver support for the on-board G1 processor.



Virtual Texture Management

The release of the Savage3D added a new buzzword to the vocabulary of the gaming industry: "texture compression." The basic idea behind S3's texture compression algorithm was to allow the user to enjoy the benefits of enormous amounts of textures without having to deal with a huge drop in performance.

3Dlabs employs a technique known as Virtual Texture Management with a similar goal in mind. The idea behind the GLINT R3's Virtual Texture Management is quite simple: instead of frame rates that swing between extreme highs and extreme lows (i.e. ranging between 5 and 55 fps), Virtual Texture Management makes sure that your frame rates remain at a more constant level (i.e. 30 fps).

Acting as a sort of L2 cache, the GVX1's 32MB of on-board memory works in cooperation with up to 256MB of your system memory, which is used to store textures. When a request for those textures is made, the Virtual Texture Engine takes the textures it needs to display and transfers them from system memory to the graphics frame buffer. For gamers, S3's Texture Compression is a much more viable option than 3Dlabs' Virtual Texture Engine; however, if you keep in mind that the GVX1 isn't a hard core gamer's card, the Virtual Texture Management support of the GVX1 begins to make much more sense.
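Conceptually the scheme behaves like a software-managed cache. The sketch below is our own illustration of the idea (not 3Dlabs' implementation), with plain allocations standing in for video memory:

    /* Conceptual sketch of virtual texture management: the 32MB on-board
       memory acts as a cache over a larger pool of textures kept in
       system memory. Our illustration, not 3Dlabs' code. */
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    #define ONBOARD_BYTES (32u * 1024 * 1024)
    #define MAX_TEXTURES  1024

    typedef struct {
        size_t        bytes;
        const void   *system_copy;  /* resident in system RAM (up to 256MB) */
        void         *onboard_copy; /* NULL when not in the frame buffer    */
        unsigned long last_used;    /* for least-recently-used eviction     */
    } Texture;

    static Texture       textures[MAX_TEXTURES];
    static size_t        onboard_used = 0;
    static unsigned long clock_tick   = 0;

    static void evict_lru(void) {
        Texture *victim = NULL;
        for (int i = 0; i < MAX_TEXTURES; ++i)
            if (textures[i].onboard_copy &&
                (!victim || textures[i].last_used < victim->last_used))
                victim = &textures[i];
        if (victim) {
            free(victim->onboard_copy);  /* stand-in for freeing video RAM */
            victim->onboard_copy = NULL;
            onboard_used -= victim->bytes;
        }
    }

    /* Called when the rasterizer needs a texture: upload on miss,
       evicting least-recently-used textures until it fits. */
    void *touch_texture(Texture *t) {
        t->last_used = ++clock_tick;
        if (!t->onboard_copy) {
            while (onboard_used + t->bytes > ONBOARD_BYTES && onboard_used > 0)
                evict_lru();
            t->onboard_copy = malloc(t->bytes); /* stand-in for video RAM */
            memcpy(t->onboard_copy, t->system_copy, t->bytes);
            onboard_used += t->bytes;
        }
        return t->onboard_copy;
    }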

When rendering any complex scene, especially when dealing with a large amount of textures in the scene, it makes more sense to be able to manipulate the scene at a reasonable speed rather than have extreme highs and lows in the frame rate. With any sort of texture compression, there is the possibility for a loss of visual quality, a sacrifice which simply cannot be made when dealing with professional applications. In a game, whether a wall's texture appears a certain way isn't of the utmost importance, however when designing an object or a scene in a professional application, the importance of maintaining visual quality does rise. Would you want the designer of your house looking at a rendering of the house with incorrectly displayed textures?

This theme of visual quality over performance rings very clearly among graphics professionals, and you will see it appear over and over again as we take a look at the performance of the GVX1 and its competitors.

The Virtual Texture Management engine is, by default, taken advantage of by all OpenGL 1.2 compliant applications. Direct3D applications may take advantage of it as well, but only if support is explicitly coded into the application or game itself. With most professionals claiming loyalty to the OpenGL API, the Virtual Texture Management of the GVX1 will probably become a key part of your system's performance without you even realizing it.



Multi-head Options

With the introduction of the PCI GVX1, 3Dlabs enabled the Oxygen GVX1 to become one of the most flexible graphics cards in its class. At the beginning of this article we discussed three methods to allow for multiple output display from a professional workstation graphics subsystem.

The chipset itself supports multi-head configurations of up to 8 displays using 8 separate boards. The beauty of this configuration is that each of the up to 8 displays receives the same performance power, provided that all of the cards are identical, and when moving from one rendering display to the next there is no drop in performance.

Courtesy of 3Dlabs' excellent drivers, a multi-head configuration can accelerate a 3D application spread across one or more displays without any performance loss due to the multi-head configuration. An application window can be open across two different displays and the rendering or animation will continue uninterrupted at full speed.

The theme of flexibility comes into play when you consider the number of options you have for configuring your 8-way multi-head display. You can pair an AGP Oxygen GVX1 with multiple Oxygen VX1 PCI boards (the VX1 is like the GVX1, only without the Gamma G1 geometry processor) for a cost effective multi-head display where your secondary displays don't need the same acceleration power as your primary display. You can also set up a single AGP Oxygen GVX1 with a mix of PCI Oxygen VX1 and GVX1 cards, depending on what your specific needs are. And of course, you can go with the most powerful setup: a single AGP Oxygen GVX1 and multiple PCI GVX1 boards.
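For the curious, under the Windows 2000 drivers mentioned later, the operating system's view of such a configuration can be walked with the standard Win32 multi-monitor API (under NT 4 the 3Dlabs driver manages the heads itself). A minimal sketch:

    /* Enumerate attached display adapters/heads via the Win32
       multi-monitor API (Windows 98/2000 and later). */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        DISPLAY_DEVICEA dd;
        DWORD i;
        for (i = 0; ; ++i) {
            ZeroMemory(&dd, sizeof(dd));
            dd.cb = sizeof(dd);
            if (!EnumDisplayDevicesA(NULL, i, &dd, 0))
                break;
            printf("adapter %lu: %s (%s)%s\n", (unsigned long)i,
                   dd.DeviceString, dd.DeviceName,
                   (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
                       ? " [active]" : "");
        }
        return 0;
    }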

Once again, 3Dlabs' excellent drivers make this possible, without causing any problems in your current crop of professional 3D rendering applications and without any decrease in performance.

2D Output

The Oxygen GVX1 provided a crisp and clear 2D image, courtesy of the high quality filters placed between the RAMDAC and the VGA output as well as the 300MHz integrated RAMDAC. Capable of driving resolutions of up to 2048 x 1536, the GVX1 also supports, through its GLINT R3 rasterization processor, an MDR-20 digital flat panel output at up to 1280 x 1024. We were a bit disappointed that 3Dlabs is still using the MDR-20 DFP interface rather than a more flexible DVI port that would enable higher resolutions.
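The 300MHz figure follows directly from the pixel clock a given mode demands. A rough sketch; the blanking overhead factors below are typical assumptions, not exact timings:

    /* Approximate pixel clock required for a display mode. */
    #include <stdio.h>

    static double pixel_clock_mhz(int w, int h, double refresh_hz) {
        const double h_blank = 1.30, v_blank = 1.05; /* assumed overhead */
        return w * h_blank * h * v_blank * refresh_hz / 1e6;
    }

    int main(void) {
        printf("1024x768@75Hz:  ~%.0f MHz\n", pixel_clock_mhz(1024, 768, 75));
        printf("2048x1536@60Hz: ~%.0f MHz\n", pixel_clock_mhz(2048, 1536, 60));
        printf("2048x1536@75Hz: ~%.0f MHz\n", pixel_clock_mhz(2048, 1536, 75));
        return 0;
    }

By this estimate the top resolution is comfortable at 60Hz (~258MHz), while 75Hz would push past the 300MHz RAMDAC, which is why the highest refresh rates are reserved for lower resolutions.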

The 2D performance of the GVX1 is identical to that of the Oxygen VX1 simply due to the fact that both boards use the same GLINT R3 rasterization processor.



Superior Drivers

3Dlabs has always been able to deliver very solid and stable drivers with their products, and the Oxygen GVX1 is no exception. Like the VX1, the Oxygen GVX1 makes use of a driver technology that 3Dlabs likes to call PowerThreads SSE.

The PowerThreads SSE OpenGL drivers that ship with the GVX1 help performance in multiprocessor configurations. The Gamma G1 geometry processor is capable of 3 billion floating point operations per second, noticeably more than a Pentium III 500; but in a system with dual or even quad CPUs, there may be a need to balance the geometry and lighting load between the geometry processor and the host CPUs. This situation is where the multithreaded PowerThreads SSE drivers come into play.
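3Dlabs hasn't published how PowerThreads partitions the work, but the general shape of a multithreaded geometry stage is easy to sketch. Our illustration below uses POSIX threads, stand-in transform routines, and a hypothetical split ratio:

    /* Sketch of splitting transform & lighting between the geometry chip
       and a host CPU thread. Our illustration only; 3Dlabs has not
       published how PowerThreads actually partitions the load. */
    #include <pthread.h>
    #include <stddef.h>

    typedef struct { const float *in; float *out; size_t verts; } Batch;

    /* Stand-ins for the real paths (vertices are 3 floats each): */
    static void transform_on_cpu(const float *in, float *out, size_t n) {
        for (size_t i = 0; i < n * 3; ++i) out[i] = in[i]; /* identity */
    }
    static void submit_to_gamma(const float *in, float *out, size_t n) {
        transform_on_cpu(in, out, n); /* pretend the G1 did the work */
    }

    static void *cpu_worker(void *arg) {
        Batch *b = (Batch *)arg;
        transform_on_cpu(b->in, b->out, b->verts);
        return NULL;
    }

    /* Hand a fraction of each vertex batch to a host CPU thread and the
       remainder to the geometry processor, then join. */
    void transform_batch(const float *in, float *out,
                         size_t verts, double cpu_share) {
        size_t n_cpu = (size_t)(verts * cpu_share); /* e.g. 0.25 */
        Batch b = { in, out, n_cpu };
        pthread_t t;
        pthread_create(&t, NULL, cpu_worker, &b);
        submit_to_gamma(in + n_cpu * 3, out + n_cpu * 3, verts - n_cpu);
        pthread_join(t, NULL);
    }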

The addition of the SSE onto the PowerThreads name is a result of SSE optimizations in the drivers themselves.

The driver utilities themselves are quite useful, providing options to control the state of V-Sync (enabled/disabled) as well as to toggle optimizations for specific applications. The performance boost from enabling a particular application's optimizations can be noticeable, depending on usage patterns.

3Dlabs already has beta Windows 2000 drivers available for the Oxygen GVX1 and they are committed to providing the GVX1 with full support under Windows 2000 upon its release.

The Test

All tests were conducted at 1024 x 768 x 32-bit color at a 75Hz refresh rate. The latest drivers for each video card were used. Windows NT had Service Pack 6 installed.

The test system was a Pentium III 600 with 512MB SDRAM, a 22GB IBM Ultra ATA 66 HDD, and a Linksys LNE10/100TX Ethernet card.



SPECviewperf

Measuring performance in the professional environment is quite a difficult task. There are numerous possibilities for the manner in which the particular graphics card will be used, and there is no one benchmark that can tell you how well a card will perform under all applications.

The Standard Performance Evaluation Corporation, commonly known as SPEC, managed to come up with a synthetic benchmark with real world implications. By running specific "viewsets" SPECviewperf can simulate performance under various applications. To be more accurate, according to SPEC, "A viewset is a group of individual runs of SPECviewperf that attempt to characterize the graphics rendering portion of an ISV's application." While this method is by no means capable of identifying the performance of a card in all situations, it does help to indicate the strengths and weaknesses of a particular setup.

SPECviewperf 6.1.1 currently features five viewsets: Advanced Visualizer, DesignReview, Data Explorer, Lightscape and ProCDRS-02. Before each set of results we've provided SPEC's own description of the viewset so you can better understand what it measures, performance-wise.

Each viewset is divided into a number of tests, ranging from 4 to 10 in quantity. These tests each stress a different performance element of the application that the viewset attempts to simulate. Since every application emphasizes some features more than others, each test is weighted, meaning that each one affects the final score to a different degree.

All results are reported in frames per second, so the higher the value, the better the performance. The last result given for each viewset is the WGM, or Weighted Geometric Mean. This value is, as the name implies, the weighted geometric mean of all of the test scores. The formula used to calculate the WGM is as follows:

WGM = score_1^w_1 * score_2^w_2 * ... * score_n^w_n

with n being the number of tests in a viewset and each w being the weight of that test, expressed as a number between 0.0 and 1.0 (the weights of a viewset sum to 1.0).

If you'd like to know more about why a Weighted Geometric Mean is used, SPEC has an excellent article on its website detailing just why.
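A quick sketch of the computation in C, using the Advanced Visualizer weights from the next section as example data (the frame rate scores here are invented purely for illustration):

    /* Weighted geometric mean as SPECviewperf reports it:
       WGM = product(score_i ^ w_i), with the weights summing to 1.0. */
    #include <math.h>
    #include <stdio.h>

    double weighted_geometric_mean(const double *scores,
                                   const double *weights, int n) {
        double log_sum = 0.0;
        for (int i = 0; i < n; ++i)
            log_sum += weights[i] * log(scores[i]); /* work in log space */
        return exp(log_sum);
    }

    int main(void) {
        /* AWadvs-03 weights (see the viewset description below); the
           frame rate scores are made up for the sake of the example. */
        double w[10]   = { .418, .285, .1045, .095, .0475,
                           .022, .015, .0055, .005, .0025 };
        double fps[10] = { 12.0, 45.0, 14.0, 30.0, 28.0,
                           11.0, 44.0, 13.0, 29.0, 27.0 };
        printf("WGM = %.2f fps\n", weighted_geometric_mean(fps, w, 10));
        return 0;
    }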



Advanced Visualizer (AWadvs-03) Viewset

Taken from http://www.spec.org/gpc/opc.static/awadvs.htm

Advanced Visualizer from Alias/Wavefront is an integrated workstation-based 3D animation system that offers a comprehensive set of tools for 3D modeling, animation, rendering, image composition, and video output. All operations within Advanced Visualizer are performed in immediate mode with double buffered windows. There are four basic modes of operation within Advanced Visualizer:

  • 55% material shading (textured, z-buffered, backface-culled, 2 local lights)
    • 95% perspective, 80% trilinear mipmapped, modulated (41.8%)
    • 95% perspective, 20% nearest, modulated (10.45%)
    • 5% ortho, 80% trilinear mipmapped, modulated (2.2%)
    • 5% ortho, 20% nearest, modulated (.55%)
  • 30% wireframe (no z-buffering, no lighting)
    • 95% perspective (28.5%)
    • 5% ortho (1.5%)
  • 10% smooth shading (z-buffered, backface-culled, 2 local lights)
    • 95% perspective (9.5%)
    • 5% ortho (.5%)
  • 5% flat shading (z-buffered, backface-culled, 2 local lights)
    • 95% perspective (4.75%)
    • 5% ortho (.25%)
These are the 10 tests specified by the viewset that represent the most common operations performed by Advanced Visualizer:

Test  Weight  Advanced Visualizer functionality represented
1     41.8%   Material shading of polygonal animation model with highest interactive image fidelity and perspective projection.
2     28.5%   Wireframe rendering of polygonal animation model with perspective projection.
3     10.45%  Material shading of polygonal animation model with lowest interactive image fidelity and perspective projection.
4     9.5%    Smooth shading of polygonal animation model with perspective projection.
5     4.75%   Flat shading of polygonal animation model with perspective projection.
6     2.2%    Material shading of polygonal animation model with highest interactive image fidelity and orthogonal projection.
7     1.5%    Wireframe rendering of polygonal animation model with orthogonal projection.
8     .55%    Material shading of polygonal animation model with lowest interactive image fidelity and orthogonal projection.
9     .5%     Smooth shading of polygonal animation model with orthogonal projection.
10    .25%    Flat shading of polygonal animation model with orthogonal projection.

The GeForce simply dominates here. The reason is the nature of this viewset: it is heavily dependent on a high fill rate (480Mpixels/sec for the GeForce) and also benefits quite nicely from NVIDIA's superior hardware T&L capabilities. What you'll find is that certain applications truly allow the power of the $300 GeForce's GPU to stand out, while in others it falls flat.

The Oxygen GVX1 performs respectably here; in some cases it falls behind the old VX1, simply due to benchmarking variations and a lack of driver optimizations. It should also be noted that the 3Dlabs Application Configuration Control Panel was left at its default OpenGL setting during this test. Some of the other settings could provide greater performance, but realistically, the GeForce isn't going to be touched in this particular test.

The Weighted Geometric Mean says it all: the GeForce's powerful GPU and high fill rate give it a very large advantage here at a very affordable cost. The GVX1 simply can't compete with this gamer's card, even in its own professional environment. But let's move on to the next viewset to see how things change...



DesignReview (DRV-06) Viewset

Taken from http://www.spec.org/gpc/opc.static/drv.htm

DesignReview is a 3D computer model review package specifically tailored for plant design models consisting of piping, equipment and structural elements such as I-beams, HVAC ducting, and electrical raceways. It allows flexible viewing and manipulation of the model for helping the design team visually track progress, identify interferences, locate components, and facilitate project approvals by presenting clear presentations that technical and non-technical audiences can understand. There are 6 tests specified by the viewset that represent the most common operations performed by DesignReview. These tests are as follows:

Test  Weight  DRV functionality represented
1     45%     Walkthrough rendering of curved surfaces. Each curved object (i.e., pipe, elbow) is rendered as a triangle mesh, depth-buffered, smooth-shaded, with one light and a different color per primitive.
2     30%     Walkthrough rendering of flat surfaces. This is treated as a different test than #1 because normals are sent per facet and a flat shade model is used.
3     8%      For more realism, objects in the model can be textured. This test textures the curved model with linear blending and mipmaps.
4     5%      Texturing applied to the flat model.
5     4%      As an additional way to help visual identification and location of objects, the model may have "screen door" transparency applied. This requires the addition of polygon stippling to test #2 above.
6     4%      To easily spot rendered objects within a complex model, the objects to be identified are rendered as solid and the rest of the view is rendered as a wireframe (line strips). The line strips are depth-buffered, flat-shaded and unlit. Colors are sent per primitive.
7     4%      Two other views are present on the screen to help the user select a model orientation. These views display the position and orientation of the viewer. A wireframe, orthographic projection of the model is used. Depth buffering is not used, so multithreading cannot be used; this preserves draw order.

The Oxygen GVX1 performs quite nicely here, and the lead the GeForce holds is much smaller than the massacre we saw in the last test. While the more expensive GVX1 does come in second place in all of the tests, it is close on the heels of the GeForce overall; keeping in mind that the GVX1 was out months before the GeForce was even talked about, the performance isn't terrible.

But why not go out and buy a GeForce today for all of your professional rendering applications? There is more to this equation than just performance, but we'll get to that in a bit; let's finish up these numbers first.



Data Explorer (DX-05) Viewset

Taken from: http://www.spec.org/gpc/opc.static/dx.htm

The IBM Visualization Data Explorer (DX) is a general-purpose software package for scientific data visualization and analysis. It employs a data-flow driven client-server execution model and is currently available on Unix workstations from Silicon Graphics, IBM, Sun, Hewlett-Packard and Digital Equipment. The OpenGL port of Data Explorer was completed with the recent release of DX 2.1.

The tests visualize a set of particle traces through a vector flow field. The width of each tube represents the magnitude of the velocity vector at that location. Data such as this might result from simulations of fluid flow through a constriction. The object represented contains about 1,000 triangle meshes of approximately 100 vertices each. This is a medium-sized data set for DX.

Test  Weight  DX functionality represented
1     40%     TMESH's immediate mode.
2     20%     LINE's immediate mode.
3     10%     TMESH's display listed.
4     8%      POINT's immediate mode.
5     5%      LINE's display listed.
6     5%      TMESH's list with facet normals.
7     5%      TMESH's with polygon stippling.
8     2.5%    TMESH's with two sided lighting.
9     2.5%    TMESH's clipped.
10    2%      POINT's direct rendering display listed.

Performance-wise, the GeForce tramples the GVX1 once again, but both cards offer respectable performance. The GeForce's more powerful T&L engine gives it the edge here. Driver issues seemed to hold the GVX1 back in some of the lesser-weighted tests; there is no reason the VX1 should be outperforming it otherwise.

Once again, a very similar situation. The GeForce is on top with a respectable 3Dlabs following.



Lightscape (Light-03) Viewset

Taken from: http://www.spec.org/gpc/opc.static/light.htm

The Lightscape Visualization System from Discreet Logic represents a new generation of computer graphics technology that combines proprietary radiosity algorithms with a physically based lighting interface.

There are four tests specified by the viewset that represent the most common operations performed by the Lightscape Visualization System:

Test  Weight  Lightscape functionality represented
1     25%     Walkthrough wireframe rendering of "Cornell Box" model using line loops with colors supplied per vertex.
2     25%     Full-screen walkthrough solid rendering of "Cornell Box" model using smooth-shaded z-buffered quads with colors supplied per vertex.
3     25%     Walkthrough wireframe rendering of 750K-quad Parliament Building model using line loops with colors supplied per vertex.
4     25%     Full-screen walkthrough solid rendering of 750K-quad Parliament Building model using smooth-shaded z-buffered quads with colors supplied per vertex.

The performance here is low for all of the contenders; the main issues that need to be stressed relate to visual quality. But since this is the performance section of our evaluation, we will concentrate only on performance, where the crown once again falls into the lap of the GeForce.



ProCDRS-02 Viewset

Taken from: http://www.spec.org/gpc/opc.static/procdrs.htm

The ProCDRS-02 viewset is a complete update of the CDRS-03 viewset. It is intended to model the graphics performance of Parametric Technology Corporation's CDRS industrial design software.

For more information on CDRS, see http://www.ptc.com/icem/products/cdrs/cdrs.htm

The viewset consists of ten tests, each of which represents a different mode of operation within CDRS. Two of the tests use a wireframe model, and the other tests use a shaded model. Each test returns a result in frames per second, and a composite score is calculated as a weighted geometric mean of the individual test results. The tests are weighted to represent the typical proportion of time a user would spend in each mode.

All tests run in display list mode. The wireframe tests use anti-aliased lines, since these are the default in CDRS. The shaded tests use one infinite light and two-sided lighting. The texture is a 512 by 512 pixel 24-bit color image. See the script files for more information.
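For the curious, the state these shaded tests exercise maps onto a handful of standard OpenGL 1.x calls. A minimal sketch (the parameter values are illustrative, not taken from the CDRS scripts):

    /* State in the spirit of the ProCDRS shaded/texgen tests:
       one infinite light, two-sided lighting, eye-linear texgen. */
    #include <GL/gl.h>

    void setup_shaded_texgen_state(void) {
        /* An infinite (directional) light: w = 0 in the position vector. */
        GLfloat dir[4] = { 0.0f, 0.0f, 1.0f, 0.0f };
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightfv(GL_LIGHT0, GL_POSITION, dir);
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

        /* Eye-linear texture coordinate generation, the basis of the
           "dynamic reflections" tests. */
        GLfloat s_plane[4] = { 1.0f, 0.0f, 0.0f, 0.0f };
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGenfv(GL_S, GL_EYE_PLANE, s_plane);
        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_2D);
    }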

Test  Weight  Description
1     25%     Wireframe test
2     25%     Wireframe test, walkthrough
3     10%     Shaded test
4     10%     Shaded test, walkthrough
5     -       Shaded with texture
6     -       Shaded with texture, walkthrough
7     -       Shaded with texture, eye linear texgen (dynamic reflections)
8     -       Shaded with texture, eye linear texgen, walkthrough
9     -       Shaded with color per vertex
10    -       Shaded with color per vertex, walkthrough

Here is where the GeForce runs into problems and where 3Dlabs' 15 years of driver experience truly begins to shine. For whatever reason, the GeForce does not seem to like the first two tests, both of which are wireframe tests. We've run wireframe tests before, so why does the GeForce suddenly perform below par? What's different in this viewset is that the two wireframe tests use anti-aliased lines, which the GeForce seems to have trouble rendering.
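For reference, requesting anti-aliased lines in OpenGL takes only a couple of standard state changes, which is why the trouble here points at the driver or hardware rather than the application:

    /* Anti-aliased (smoothed) lines in OpenGL 1.x: the feature the two
       heavily weighted ProCDRS wireframe tests exercise. */
    #include <GL/gl.h>

    void enable_aa_lines(void) {
        glEnable(GL_LINE_SMOOTH);
        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
        /* Smoothed lines are blended against the background. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }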

The cause could be software related, but the TNT2 was using the same driver set as the GeForce and no AA problem emerged in its tests at all. We are leaning towards an issue between the GeForce's hardware and the current driver release, which is possibly why we have yet to see a fix for the problem. Keep in mind that the GeForce is still a gamer's card; NVIDIA has a professional version of the GeForce known as the Quadro. The Quadro features a slightly higher clock speed than the GeForce and drivers tailored to professional applications, which should make competitors in the workstation graphics arena worried. Initial tests indicate that the Quadro does not have this AA problem, but at the same time, keep in mind that the Quadro will easily be priced at the same level as the GVX1 (~$600).

The GVX1 performs quite nicely here, and gains the lead over the competition courtesy of its speedy wireframe AA rendering as well as its relatively strong performance throughout the rest of the tests. With the two wireframe AA tests accounting for half of the final weighted geometric mean, it is not a surprise that the GVX1 comes out on top.



Indy3D

The Indy3D benchmark is split up into three types of tests: application, image quality, and primitive (synthetic) tests. For the purposes of illustrating performance we have only used the application tests in this section.

The application tests are split into 3 sections, MCAD, Animation, and Simulation. Their respective descriptions from the SENSE8 Corporation Website can be found below:

MCAD Benchmark

The MCAD benchmark test consists of two different tests (MCAD40 and MCAD150) designed to simulate the rendering of typical models of medium to high complexity (40,000 or 150,000 polygons). The MCAD150 test has enough polygons that it tends to give results highly dependent on the CPU or geometry transformation hardware and little else.

The MCAD visual database is an engine model supplied by Engineering Animation Incorporated (EAI). The engine was created with SDRC's IDEAS Master Series and was converted into a VRML 1.0 file using EAI's VisMockup application.

Animation Benchmark

The Animation model is a human figure supplied by Westwood Studios. We feel this type of character-based modeling is typical of the game and video animation markets. The cityscape around the figure was created by Sense8 to put the figure in a typical setting. The file supplied by Westwood Studios was in 3D Max format and was then processed by Sense8 to eliminate redundant polygons and stored in a compact Sense8 format.

Simulation Benchmark

We have selected a realistic sailing simulation built by Sense8. The sailboat is driven forwards by wind forces acting on the boat and by resistance of the boat to the water. The physics involved in this simulation are fairly simple and we have verified that on the reference system, the impact of running the physics simulation is not noticeable with any of the graphics boards we tested at the official settings. However, it is possible that for smaller windows or boards with extremely large texture fill capabilities (anything that can deliver 25-35 fps for the simulation), the impact of the physics model will be felt if the CPU can't keep up with the rendering and becomes the bottleneck. The justification for this is that in the simulation market, there is almost always some CPU utilization for running code other than 3D transformations/lighting.

In addition to the physics simulation, we are creating "waves" by moving the vertices of a mesh of polygons near the boat. We have verified that this makes a negligible contribution to performance under most expected environments.

Judging by the nature of the benchmarks, we'd once again expect to see the GeForce come out on top. And indeed it does with the GVX1 as the only remotely close competitor in the MCAD benchmarks. But by this time we've already proven the basic performance of all of the cards being compared here.



3D Studio MAX R2.5

The two Kinetix supplied 3D Studio MAX benchmarks illustrate an obvious performance difference between the cards, but for the most part 3D Studio MAX is dependent on raw CPU power rather than the power of the video card, although the top two performers do change this somewhat with their hardware T&L support.



Image Accuracy & Quality

Throughout this entire review the cheaper GeForce has been more or less trampling the GVX1 in the performance tests, yet we haven't praised NVIDIA's creation as a true savior of the workstation graphics community. The fact of the matter is that when you're dealing with high end professional 3D rendering applications, design integrity, and thus image accuracy/quality, is much more important than performance.

If the design you're creating is displayed improperly, the ramifications of sending that design to production without constantly checking over your work are horrendous. 3Dlabs has taken the time to make sure that their OpenGL implementation is solid and that there are no flaws in their drivers that would result in inaccuracies in the actual display of rendered images.

The exact opposite is true for games. If one or two pixels are out of alignment, or if a texture appears incorrectly, then the consequences are minimal. The game may not look "perfect" to a well trained eye, but other than that, no one gets hurt. It is this reasoning that supports the fact that the GeForce, in spite of its more than stellar performance, cannot compete with the GVX1 on the level of image accuracy and quality, two of the most important facets of any professional 3D design.

With the ProCDRS-02 viewset under SPECviewperf, part of the output is the percentage of pixels that differ from a reference image, a measure of how severe the rendering inaccuracies are. In our tests, the GeForce came out with around a 1% error on each test while our GVX1 test card averaged very close to 0.0%. While 1% may not seem like a lot, a 1% error in the design of a complex building or mechanical part can be the difference between a successful one and a failing one.
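The error metric itself is simple to reproduce: compare the rendered frame against a reference image pixel by pixel. A minimal sketch, assuming tightly packed 32-bit RGBA buffers:

    /* Percentage of pixels differing from a reference image, in the
       spirit of SPECviewperf's accuracy reporting. */
    #include <stdint.h>
    #include <stddef.h>

    double percent_pixels_different(const uint32_t *rendered,
                                    const uint32_t *reference,
                                    size_t pixel_count) {
        size_t diff = 0;
        for (size_t i = 0; i < pixel_count; ++i)
            if (rendered[i] != reference[i])
                ++diff;
        return 100.0 * (double)diff / (double)pixel_count;
    }

At 1024 x 768, a 1% error works out to almost 8,000 wrong pixels in every frame.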

3Dlabs actually provides some very nice examples of these inaccuracies produced by the GeForce, as well as by other cards that attempt to compete with the GVX1 in the high end arena, on its quality comparison page. For your convenience, here are the shots comparing the GeForce to the GVX1 directly. In our tests we noticed very similar results:

[Screenshot comparison: GeForce 256 vs. Oxygen GVX1. The captions note missing polygons that destroy model integrity, pixels dropped due to rendering inaccuracies, and missing pixels between line segments that fail to meet the Viewperf standards of quality.]

This is a very big problem, and regardless of what performance a card may offer, in a professional environment, issues like these cannot be ignored.



Conclusion

The name 3Dlabs has always been associated with quality, and that tradition carries on with the Oxygen GVX1. With the recent introduction of the PCI version of the card, the possibilities for multi-head operation make the GVX1 a very flexible solution that should have no problem earning back its $600 price tag in no time. The high quality drivers, above average performance, and overall stability/reliability of the card will keep it proudly bearing the 3Dlabs name.

Unfortunately, there is a threat to 3Dlabs and the GVX1 in the form of NVIDIA's GeForce. We have already made it clear that the GeForce has some issues with wireframe AA rendering, as well as with dropped pixels in various rendering situations. A driver fix has yet to appear, and we most likely won't see one for the GeForce, if one is even possible. NVIDIA doesn't want the GeForce competing on this level; instead, they have another child to bring into the professional market, the Quadro.

The Quadro supposedly isn't plagued by the accuracy and quality problems of the GeForce, but the chip itself isn't very different from the GeForce at all; the main difference seems to be the drivers. NVIDIA isn't letting just anyone produce Quadro based cards either -- only ELSA, a company that has a long history of specializing in workstation level products and drivers. So don't be too surprised if NVIDIA's Quadro turns out to be a very good competitor in this market, provided that the drivers are sound and capable of meeting the demands of professional users.

Until the day the Quadro comes in as the technology to beat, we will continue to turn to companies like 3Dlabs to offer us sub-$1000 solutions like the Oxygen GVX1.
