nVIDIA's "Cg" Language
An Interview With David Kirk, Chief Scientist - NVIDIA Corp
What's on the horizon for next generation graphics

By Dave Altavilla
6/14/02

   
 

 


The recent introduction of 10-bit color precision on consumer graphics cards seems to promise improved fidelity at both the desktop and 3D level. What are your thoughts on this?

I think that 10-bit color is a half-step to where we really want to be. Clearly, we would like to see more precision in the lighting and shading calculations, and in texture and special effects blending. When many layers of transparent effects such as smoke, fire, etc., are blended together, sometimes we see banding. Also, since many effects in the current generation of GPUs are rendered using multi-pass, some precision may be lost in very bright or dark areas. For these purposes, though, 10 bits doesn't really help much. Ideally, you would like to see some more bits at the high end, to allow 2x or 4x over-bright for flashes, explosions, etc., and at least 2 or 3 low-order fractional bits, to avoid loss of precision in blending. So, we're up to at least 12-14 bits to be really useful, and I'd like to see at least 16. Also, the next generation of GPUs probably won't require multi-pass to render complex shading, so the 10-bit frame buffer isn't worth much. Finally, since most affordable flat panel LCD displays can really only display 5-6 bits of distinct intensities reliably, and CRTs can typically barely display 7 or 8 bits with any fidelity, supporting a display buffer with more than 8 bits is really wasted money.
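As a purely illustrative aside (ours, not Kirk's), the precision loss he describes can be sketched in a few lines of Python. The layer count, opacity, and intensities below are hypothetical values chosen only to show how rounding back to an 8-bit frame buffer after every blending pass discards small contributions that a higher-precision buffer would keep.

```python
# Minimal sketch: layering translucent effects (smoke, fire, etc.) by
# repeated alpha blending, once with the result quantized back to an
# 8-bit integer frame buffer after every pass, and once kept in floating
# point. The gap that opens up in dark regions is the kind of precision
# loss and banding described above for multi-pass rendering.

LAYERS = 32          # hypothetical number of translucent layers
ALPHA = 0.02         # hypothetical per-layer opacity
LAYER_COLOR = 0.02   # hypothetical (dark) layer intensity, 0.0 - 1.0

dest_8bit = 2                    # dark background value on a 0-255 scale
dest_float = dest_8bit / 255.0

for _ in range(LAYERS):
    # 8-bit path: blend, then round back to an integer frame-buffer value
    src = round(LAYER_COLOR * 255)
    dest_8bit = round(src * ALPHA + dest_8bit * (1.0 - ALPHA))

    # floating-point path: no quantization between passes
    dest_float = LAYER_COLOR * ALPHA + dest_float * (1.0 - ALPHA)

print("8-bit frame buffer result :", dest_8bit)        # never accumulates
print("floating-point result     :", round(dest_float * 255, 2))
```

Run as written, the 8-bit path stays pinned at its starting value while the floating-point path slowly accumulates the layers, which is exactly the kind of error that extra fractional bits are meant to avoid.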
 


How do you think Matrox's 16X "Fragment AA" will compare to NVIDIA's multi-sample AA?

I have not seen Matrox's "Fragment AA" yet. I understand, from what I have read on the websites, that it is a fast but approximate anti-aliasing technique. I'll look forward to seeing it when the product is available.
 


Obviously, with R300 boards on display and running in systems, ATi is fairly close to being able to deliver what some are rumoring to be a GeForce4 killer.  Everyone knows how quickly NVIDIA can react from a design cycle standpoint.  Can you comment on the relative readiness of NV30?

Sorry, no! I can't comment on future products. Regarding futures, I also think it's important to differentiate between sightings of new products and the real thing: mass production and availability. GeForce4 Ti 4600 continues to be the most advanced and powerful product that you can buy.


Do you think, upon its release, that NV30 will keep NVIDIA in the leadership position the company enjoys now?

What is this “NV30” that you keep talking about? 
It is my expectation that NVIDIA’s awesome technical team will continue to express their passion for excellence by producing exciting and powerful products that consumers love.
 


How big of an impact will DirectX 9 have on the look of computer games?

I think that DirectX 9 will have a tremendous effect on the visual richness of computer games. This will be especially true for games authored using a high level language such as Cg, since the learning curve will be so much faster.
 


How important to NVIDIA is driver development and the software engineering that goes into it?

NVIDIA's software driver team is first-rate, and they have made some terrific contributions to our products. One of their greatest achievements is our unified driver architecture (UDA), which allows the same driver to run on any of our hardware products. This also allows a new chip to run with the old drivers, and that enables us to get new products to market very quickly. That is also why you frequently see subsequent Detonator software releases providing such large performance and feature gains: often, the new features in a chip are not yet supported by the driver at the time of the hardware introduction.
 


How important is fab process technology for next generation graphics?  Are gate lengths and die geometries the limiting factor?

Fab process technology gives us a free factor of two in "capability" (the number of transistors times the switching speed) every year or so. In addition, graphics is a highly parallel problem to solve. We could calculate all of the pixels in parallel, if we had enough transistors. This makes building fast graphics processors much easier than building CPUs, where everything must be done sequentially. Also, there is a very deep and rich history of graphics research from SIGGRAPH and the rest of the graphics community. We can use the new capabilities and transistors to implement these ideas. When we put all of these together, we are able to deliver approximately 2x performance every 6 months. Truly stunning.
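As a rough, back-of-the-envelope illustration (our sketch, not Kirk's), the compounding he describes can be worked out in a few lines of Python; the 1.0 baseline and three-year horizon are arbitrary assumptions.

```python
# Minimal sketch: compound growth of "capability" (transistor count times
# switching speed) from process scaling alone (~2x per year, as quoted)
# versus delivered GPU performance (~2x every 6 months, i.e. 4x per year,
# once parallelism and architecture improvements are added). The baseline
# of 1.0 is arbitrary.

YEARS = 3              # illustrative time horizon
process_only = 1.0     # capability from fab process scaling alone
delivered = 1.0        # delivered graphics performance

for year in range(1, YEARS + 1):
    process_only *= 2      # one doubling per year from the process
    delivered *= 2 * 2     # two doublings per year (every 6 months)
    print(f"Year {year}: process alone ~{process_only:.0f}x, "
          f"delivered performance ~{delivered:.0f}x")
```

After three years the process alone has delivered 8x, while the doubling every six months compounds to 64x, which is why the extra leverage Kirk attributes to parallelism and graphics research matters so much.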



This is a fairly open ended question… What would you most like to improve upon at NVIDIA?

I’d like to see more hours in the day. There are so many exciting products that I’d like to build, and it takes time to build each one with the passion and dedication necessary to make it great. We spend a lot of time thinking about where PCs and graphics are going, and have a lot of great visions. It’s just hard to wait for the future to arrive.  We’re working hard to make it get here sooner.

I envision cinematic quality rendering and real time graphics, with the same quality as animated movies.  It’s not going to be easy to get there but we’re working on it.



Last question, and it may be a tough one. If NVIDIA could produce a no-compromise GPU, disregarding price, process, die size, etc., what features and architecture would it have?

All of them! It would be huge, fast, free, and could make any picture, in real time.

I need to get back to work to try to get us there!
 




Thanks for your time and patience in bearing with our gauntlet of questions, David! Best of luck to you and the NVIDIA team. We'll be eagerly anticipating your next move!


 

 

Well then, we've given you a taste of what NVIDIA has in store for game developers, as well as what their Chief Scientist and "visionary", David Kirk, thinks about this new era of programmable GPUs and the software that is driving them. If you were paying attention, David hints at what might be in store for us with the NV30. Can we read into this that NVIDIA is targeting 16-bit color precision in 3D? Did we also hear that perhaps the next generation of NVIDIA GPUs will be capable of rendering complex shading effects in a single pass? Does this mean more Vertex Shaders will be incorporated into the NV30 than the two units on the GeForce4 Ti? It is probably a safe bet that NVIDIA will be stepping things up to this level with the NV30. However, until we have official statements that confirm or deny it, this is all just speculation.

For now, the game developer community has powerful new tools at its disposal, courtesy of NVIDIA. In addition, the road ahead for 3D graphics is getting more impressive every day.

 

Questions?  Comments? 
Do you have something meaningful to say or do you just want to flap your gums?

Get into the HotHardware PC Hardware Forum and air it out!

 

 




