Original Link: https://www.anandtech.com/show/466
Please introduce yourself and let us know what you do at Bitboys.
My name is Shane Long and I am the President and CEO of Bitboys Inc. and Oy.
How did Bitboys form as a company?
The company was actually formed by a group of individuals who had a great deal of experience in visual simulation and 3D graphics design. They wrote some very impressive and very fast software engines before the mass availability of good 3D accelerators.
What experience do your engineers have in the 3D market?
Most have backgrounds relating to 3D hardware and/or software algorithms.
Tell us a little about the Glaze3D and what makes it so special.
We have designed a completely new approach to 3D acceleration: the Xtreme Bandwidth Architecture. XBA technology was invented during the design of our current 3D graphics architecture. XBA consists of the eight-texel/four-pixel rendering pipeline, an extremely wide 512-bit memory bus and our memory management logic. The memory management logic works as a highway system tying together the embedded DRAM, the external SDRAM, AGP memory and all units that want access to the memory. A lot of the XBA inventions are behind this memory system, which handles 768 bits of data (eDRAM + AGP + SDRAM) every clock cycle, resulting in a massive bandwidth of 12.5 GB/s.
This huge bandwidth enables us to do everything in true color; we are not really interested in 16-bit performance and dithered images. The bandwidth also allows us to do full-scene anti-aliasing with real supersampling. While rendering game graphics at 1024x768 in true color requires around 2.5 GB/s of memory bandwidth, doing the same with anti-aliasing enabled requires 10 GB/s! To put it another way, products equipped with only external SDR or DDR memory are not able to do full-scene anti-aliasing at realistic speeds. XBA-based products are really the first chips capable of improving image quality with anti-aliasing.
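The bandwidth figures quoted above can be sanity-checked with some quick arithmetic. This sketch takes the 2.5 GB/s baseline from the interview at face value and assumes the 4x factor comes from 4x (2x2) supersampling, which the interview does not state explicitly:

```python
# Rough sanity check of the quoted bandwidth figures.
# Assumption: the jump from 2.5 GB/s to 10 GB/s reflects 4x supersampling.
baseline_gbps = 2.5          # quoted bandwidth for 1024x768, true color
supersample_factor = 4       # assumed 2x2 supersampling grid

aa_gbps = baseline_gbps * supersample_factor
print(aa_gbps)               # 10.0 -- matches the figure in the interview

# For comparison, a typical 128-bit SDR memory bus at 166 MHz delivers:
sdr_gbps = 128 / 8 * 166e6 / 1e9
print(round(sdr_gbps, 2))    # ~2.66 GB/s, well short of the 10 GB/s needed
```

This is why the interview argues that boards with only external SDR/DDR memory cannot supersample at playable speeds.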
When can we finally expect Glaze3D products to ship? We've seen the "1st half of 2000" statement on your website, but can you be any more specific?
We will be demonstrating the first XBA-enabled product in Q2 and reach full volume production in Q3.
The Glaze3D originally sounded great on paper when it was announced long ago, but by the time it's actually available, 3dfx should have their Voodoo 4 & 5 products, while NVIDIA will be looking to NV11 & NV15 to keep them going. How do you see the Glaze3D stacking up in such a competitive market?
We feel that the part will be very competitive, with extraordinary performance due to its unique eight-texel engine and the memory bandwidth to actually achieve huge fillrates. In terms of texel fill rate, 3dfx will need the four-chip Voodoo5 6000 to even stay close, and NVIDIA will still only be at half of the Glaze3D 1200. The Glaze3D 2400, at double that performance level with over two gigatexels, will clearly be the market leader in fill rate. We do believe that our architecture is superior.
Do you actually have any Glaze3D silicon at this point in time? Or are you still working off simulators?
The simulators are running very well, but as of yet we do not have working silicon in house.
Who will be producing boards based on the Glaze3D?
We are in discussions with several board manufacturers to market our products. Actually we will be using a fairly clean model of distribution by focusing initially on a limited number of partners and really enabling them to succeed in the market.
What do you expect the typical memory configuration to be?
We believe that you will see three levels of XBA-enabled board-level products for the various markets:
41 MB (9 MB eDRAM + 32 MB SDRAM)
50 MB (18 MB eDRAM + 32 MB SDRAM), dual processor
82 MB (18 MB eDRAM + 64 MB SDRAM), dual processor
In what price range do you expect these products?
I do not want to speculate on our partners' pricing models, but I know beyond a shadow of a doubt that our pricing will be more than in line with expectations from both the basic gamer and the enthusiast alike.
The transistor count for the Glaze3D core is listed as 1.5 million, considerably less than the competition, yet you claim to support almost every imaginable feature. How have you been able to accomplish such efficiency?
At this time we are not going into detail on the XBA architecture.
What's the transistor count with the embedded DRAM included?
We are not releasing that at this point.
The core of the Glaze3D is listed as 150 MHz, despite the 0.20-micron process that it will be manufactured on. The transistor count also seems quite low, suggesting relatively low heat output. What is holding back the clock speed?
The speeds of our architecture will increase over time and across product generations. 150 MHz is in line with the other chips coming out this year, and we don't really need much more clock frequency, as we have an eight-texel pipeline instead of a two-texel pipeline like the upcoming Voodoo5 or the four-texel engine of the GeForce products. Running an eight-texel pipeline at 150 MHz is like running a two-texel pipeline like the Voodoo5's at 600 MHz!
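The equivalence claimed here is straightforward: peak texel fill rate is texels per clock times clock frequency. A quick check of the numbers from the interview (the 600 MHz two-texel part is hypothetical, used only for comparison):

```python
def texel_fillrate_mtexels(texels_per_clock, mhz):
    """Peak texel fill rate in megatexels per second."""
    return texels_per_clock * mhz

glaze3d_1200 = texel_fillrate_mtexels(8, 150)   # eight-texel pipe at 150 MHz
hypothetical = texel_fillrate_mtexels(2, 600)   # two-texel pipe at 600 MHz

print(glaze3d_1200, hypothetical)  # 1200 1200 -- identical peak throughput
```

The same arithmetic explains the earlier "over two gigatexels" claim for the dual-chip Glaze3D 2400: two chips at 1200 megatexels each gives 2400 megatexels, i.e. 2.4 gigatexels.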
What sort of performance increase can we expect solely from the addition of embedded DRAM?
It will enable the part to hit the actual peak fill rates quoted. Other architectures we know of will be memory bandwidth limited at higher resolutions, color depths and scene complexities; the fact is they will not be able to achieve their quoted fillrate numbers because of their memory bandwidth ceilings.
What performance hit occurs when external DRAM is accessed?
For the Glaze3D 1200 we can support 1024x768 at true color in the eDRAM. Higher true-color resolutions will have the color buffer in the external SDRAM. The required color buffer bandwidth is about half of the Z-buffer bandwidth, so putting it in the SDRAM doesn't really hurt us much.
In fact, in some resolutions and dependent on the texture scheme we could see better speeds with this combined solution, as then we can combine the bandwidths of both memory types.
The Glaze3D 2400 will support 1280x1024 in true color inside the internal frame buffer, and will use a similar structure to the one described above at 1600x1200.
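The claim that the color buffer needs roughly half the Z-buffer bandwidth follows from per-pixel access counts. This is a sketch of the reasoning, with idealized counts that ignore overdraw, blending and caching (none of which the interview details):

```python
# Idealized per-pixel memory accesses (an assumption, not from the interview):
Z_ACCESSES_PER_PIXEL = 2      # depth test reads Z, then writes it on a pass
COLOR_ACCESSES_PER_PIXEL = 1  # color is only written, never read back

ratio = COLOR_ACCESSES_PER_PIXEL / Z_ACCESSES_PER_PIXEL
print(ratio)  # 0.5 -- so moving color to external SDRAM costs relatively little
```

Under this model, keeping the bandwidth-hungry Z buffer in eDRAM and spilling only the color buffer to SDRAM is the sensible split.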
Will all Glaze3D chips include the 9MB of embedded DRAM, or will there be a model that runs purely off external DRAM?
All Bitboys products will be XBA enabled. The type of 3D performance we are talking about cannot be achieved without it.
The 9MB figure is a bit odd - usually we see memory in 2, 4, 8, 16, 32MB, etc. chunks. How was this figure arrived at?
It accommodates 1024x768 at 32-bit color completely inside the eDRAM in a single-chip configuration, and 1280x1024 at 32-bit color in a dual-chip configuration.
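The 9 MB figure falls out of the framebuffer arithmetic. This sketch assumes the eDRAM holds a front buffer, a back buffer and a 32-bit Z buffer, which the interview does not spell out explicitly:

```python
def buffer_mb(width, height, bytes_per_pixel):
    """Size of one full-screen buffer in MB (1 MB = 2**20 bytes)."""
    return width * height * bytes_per_pixel / 2**20

# Assumption: front + back + Z, all 32 bits (4 bytes) per pixel.
single_chip = 3 * buffer_mb(1024, 768, 4)
dual_chip   = 3 * buffer_mb(1280, 1024, 4)

print(single_chip)  # 9.0 -- exactly fills one chip's 9 MB of eDRAM
print(dual_chip)    # 15.0 -- fits within a dual-chip board's 18 MB
```

A 1024x768x32-bit buffer is exactly 3 MB, so three of them account precisely for the otherwise odd-looking 9 MB capacity.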
Which memory types are supported for external DRAM?
SDRAM and SGRAM.
How do you decide which information is critical to be stored in the embedded DRAM?
It is dependent on many factors including resolution, color depth, Z depth, texture size, etc. We make the best determination of where to locate the information based upon needed bandwidths.
What kind of frame rates can we expect from a Glaze3D 1200 in Quake 3 at 1024x768x32 and 1600x1200x32? How about the 2400?
It is hard to speculate, but I do not think we will disappoint with the 1200 and the 2400 is going to be amazing!
Will the Glaze3D be a reasonable professional 3D graphics solution?
The XBA architecture does lend itself very well to the workstation market; however, at this time we will be focusing on the entertainment segment of the 3D graphics industry.
How do the 2 Glaze3D chips interact in the 2400 model? Is it similar to 3dfx's SLI, ATI's AFR, Metabyte's PGP, or is it something completely different?
It is a proprietary scheme that is unlike any other currently offered solution.
The specifications mention that 4 Glaze3D chips can work together, but no such product is listed on your website. Are there any plans for one?
At this time our full efforts are on the 1200 and 2400.
Both the GeForce 256 and Savage 2000 are offering T&L support. Is this the next big step in 3D rendering?
It is real simple: triangle throughput and pixel fillrate with excellent quality are the two issues. We believe at this time that fillrate is more of an issue, and so we have concentrated our silicon budget in this area.
Geometry acceleration will be important, but at present, with the performance of Intel and AMD CPUs really moving and the polygon counts that we are looking at over the next 18 months from the top game companies, it is not clear if spending your silicon budget on floating point is the best strategy.
Your online FAQ states:
"The dedicated floating point unit we have on the Glaze3D is capable of doing T&L, but we intend to leap into totally new geometry performance level with our forthcoming geometry processor."
So will the Glaze3D support T&L or not?
We will not have hardware geometry support in the first XBA enabled products.
Tell us a little more about this "forthcoming geometry processor." Is it a separate chip designed to work in conjunction with the Glaze3D, or an all new design with integrated T&L? If separate, will it be available as an upgrade to initial Glaze3D offerings?
We are discussing implementing geometry directly into XBA-based chips rather than releasing a separate geometry processor. This will allow us to distribute the geometry load across multiple chips.
What do you think of 3dfx's T-Buffer? How does it compare to the accumulation buffer that the Glaze3D will offer?
All indications point to the fact that it is basically an accumulation buffer at its core.
What sort of performance hit do you anticipate with full scene anti-aliasing enabled?
This is where XBA is going to rock. With the amount of actual fillrate we have at our disposal, we will be able to do full-scene anti-aliasing with real supersampling. By having the memory bandwidth and the eight-texel engine, we will see AA actually used by enthusiast gamers!
Will your Environment bump mapping be compatible with Matrox's EMBM and all the games that support that implementation?
Actually, Bitboys is the inventor of bump mapping and licensed the technology to Microsoft. Our part will definitely support environment bump mapping, as it is one of the most important issues for visual quality in 2000.
Do you plan to support alternative OS's, such as Linux, BeOS, or even the MacOS?
We will be supporting Linux ourselves and I do anticipate working with 3rd parties on MacOS.
We appreciate you taking the time out to answer all of our questions. We look forward to seeing the Glaze3D in the AnandTech lab soon!