New NVIDIA Drivers Disable PhysX If A Non-NVIDIA GPU Is Detected

Every so often, someone on the 'Net stumbles across something noteworthy, but the news doesn't spread until weeks or maybe even months later. That's what's happened over at NGOHQ, where forum reader DarthCyclonis discovered that NVIDIA drivers released after the v185.85 WHQL package (i.e., 186.18 and higher) removed the ability to use a GeForce 8xxx, 9xxx, or 2xx card as a dedicated PhysX processor if a non-NVIDIA GPU is present.

When asked for the reason behind the change, an NVIDIA representative stated: "For a variety of reasons - some development expense some quality assurance and some business reasons NVIDIA will not support GPU accelerated PhysX with NVIDIA GPUs while GPU rendering is happening on non-NVIDIA GPUs." From a certain perspective, this makes sense. NVIDIA has invested a great deal of time and money in PhysX development, and the company has good reason not to hand its hard work over to competitors. Locking out ATI today (and, we suppose, future Larrabee cards) is a logical business step. In taking it, however, NVIDIA runs a significant risk: customers, particularly those who consider themselves loyal, may well see the forced incompatibility as a predatory move intended to push them toward NVIDIA-only solutions.

The Tension Between PhysX Uptake and Corporate Profits
Ever since it bought Ageia in 2008, NVIDIA has been calling on developers to offload physics calculations to the GPU via the company's PhysX engine, which is freely available to anyone interested in developing for the Xbox 360, PS3, or PC (both Linux and Windows). While the PhysX engine has been a reasonably popular software physics solution, the number of games that actually support hardware-accelerated PhysX is still fairly small.

Historically, how quickly and how widely software developers adopt any given video card technology correlates with the percentage of consumers able to take advantage of it and with the difficulty of writing programs that utilize it. FSAA and AF became popular partly because these were features that could be activated or deactivated by the video card's driver and did not (typically) require game developers to write additional code. Features that are only available from a single manufacturer tend to be limited to niche adoption or go unused.

If NVIDIA wants to push PhysX adoption, it should be encouraging gamers to purchase (or repurpose) an NVIDIA GPU for use as a dedicated PhysX card. Paradoxically, however, the success of the G80 and G92 series makes pushing PhysX as an add-on capability even more unappealing for the company. The data below is drawn from the hardware survey available through Valve's Steam online content distribution system.

Not only does NVIDIA hold the lion's share of the market, it dominates the percentage breakdown of DX10-capable GPUs in both XP and Vista. In Vista, the (mostly) G80-based 8800 series accounts for 5.11% of all cards sampled; NVIDIA holds nine of the top 10 spots, which account for 18.25% of the total. Under XP, the 8800 series has 15.28% of the market and again nine of the top 10 spots; here, the top 10 accounts for 53.86% of the video cards in use on Steam. Numbers like these make it easier for NVIDIA to push PhysX support from the developer side, but there's a catch. If consumers start repurposing a significant percentage of those NVIDIA cards as add-in PhysX boards, or buy low-end boards to do the same thing, NVIDIA's top-end sales could fall. This is particularly pertinent given the newly released Radeon HD 5800 series' strong performance.

The Customer Conundrum
NVIDIA's position is understandable, but the company, in our view, could be making a mistake. In its desire to deny ATI a seat at the table, NVIDIA is removing feature support from the group of people most likely to identify as loyal NVIDIA customers. From their perspective, their chosen GPU vendor is stripping out product functionality because it's feeling the heat from ATI's new Radeon HD 5870 / 5850 parts. In all fairness, NVIDIA actually disabled cross-GPU support several months ago, but the fact that it's gone carries more weight than precisely when it disappeared.

For now, NVIDIA users who want to combine CUDA / PhysX and ATI can simply keep using the v185.85 driver. The company's desire to keep hardware-level PhysX as its own special sauce may give it a degree of unique value, but locking out card capabilities could sabotage feature adoption. It's true that re-enabling cross-GPU support could cost NVIDIA some customers to ATI, but keeping the feature locked out will cost it customers anyway. Given the choice between good customer care with higher hardware-level PhysX use and bad customer care with potentially lower use, re-enabling the feature makes the most sense.

Allowing cross-manufacturer support might be the only way CUDA survives. With OpenCL and DirectCompute both gathering their own support bases, it's an open question which language programmers will opt for. ATI and NVIDIA cards can both run OpenCL code, and both manufacturers' cards can use DirectCompute. CUDA? That's an NVIDIA thing.

Until/unless NVIDIA can demonstrate that CUDA delivers performance gains or visual effects the other two languages simply can't match, it's going to face an uphill battle. There's no denying that NVIDIA was the first company to really push physics onto the GPU and tout the GPU as a massively parallel processor, but being first grants the company neither special status nor a free percentage of market share. CUDA will have to compete with OpenCL and DirectCompute on its own merits, and the tighter the restrictions NVIDIA puts on PhysX configurations, the more it may be fighting with one hand tied behind its back.