Mid-Range Workstation GPU Shootout: FireGL V5600 vs. QuadroFX 1700 vs. FireGL V3600

The QuadroFX 1700 graphics card is the workstation equivalent of the GeForce 8600 for the gaming market. Both cards are built around Nvidia’s G84 graphics processor, and both draw little enough power that no external power connector is required. The card is small, quiet, and relatively unassuming. Despite its diminutive size, however, it’s surprisingly powerful.



Nvidia QuadroFX 1700 512 MB

Nvidia’s G84 graphics processor, the heart of the QuadroFX 1700, is built on an 80nm process. While it shares much of its architecture with the high-end G80 processor found in Nvidia’s top workstation cards, the G84 is far smaller and less powerful. Where the G80 has 128 unified shader processors, the G84 seen here has only a quarter of that: 32 shader processors. This cuts performance heavily across the board, but it does make for a very small, efficient chip. The G84 packs 289 million transistors into a 169 mm² core. Despite its size, the G84 supports DirectX 10 and OpenGL 2.1, and it even includes Nvidia’s VP2 hardware video decoding engine, something the high-end G80 processor lacks.

The QuadroFX 1700 ships with its G84 processor clocked at 460 MHz and its shader clock set to 920 MHz, paired with 800 MHz (effective) DDR2 memory. These numbers are quite a bit lower than those of the GeForce 8600 gaming card, which is based on the same GPU and runs a 540 MHz core, 700 MHz memory, and a 1.19 GHz shader clock. Of course, running at lower clock speeds than their gaming brethren is fairly typical of Quadro FX cards, so what we’re seeing here is not out of line for Nvidia.

With a tame clock speed and an advanced 80nm manufacturing process for its GPU, the QuadroFX 1700 doesn’t need elaborate cooling. Nvidia outfits this board with a very small aluminum alloy thin-fin cooler with a 4-pin PWM thermally controlled fan. While the fan is small, it doesn’t run at a high RPM, so you don’t get any high-pitched fan noise. It’s a very quiet card overall, even under heavy loads.

The board is equipped with 512 MB of DDR2 memory from Hynix. The modules run at a 400 MHz base clock and connect to the GPU via a 128-bit memory bus, allowing for 12.8 GB/s of memory bandwidth. Not that impressive for a $700 board. The memory modules are left bare on the PCB, since they don’t run warm enough to require heatsinks.
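
For readers who want to check that figure, the 12.8 GB/s falls straight out of the specs above. Here’s a quick sanity check in Python (a minimal sketch; the variable names are ours, not Nvidia’s):

```python
# Memory bandwidth sanity check for the QuadroFX 1700's quoted 12.8 GB/s.
# DDR2 transfers data on both clock edges, so a 400 MHz base clock
# moves data at an effective 800 MT/s.

base_clock_hz = 400e6       # DDR2 base clock from the board's specs
transfers_per_clock = 2     # "double data rate"
bus_width_bytes = 128 // 8  # 128-bit memory bus = 16 bytes per transfer

bandwidth_gbs = base_clock_hz * transfers_per_clock * bus_width_bytes / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")  # -> 12.8 GB/s
```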



Dual-Link DVI and HDTV Connectors



Single Slot Aluminum Alloy Cooling System

The FX 1700 is equipped with two dual-link DVI output ports, each capable of driving a 2560 x 1600 30” display. In addition, the board has an HDTV output port, in case you want to hook it up to a component-enabled display. The board does not support stereoscopic output, nor can it connect to Genlock/Framelock boards the way high-end Quadro cards can.
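
To see why dual-link matters for those 30-inch panels, consider the pixel clock: single-link DVI tops out at a 165 MHz TMDS pixel clock, and 2560 x 1600 at 60 Hz exceeds that even before blanking intervals are counted. A rough sketch in Python:

```python
# Why a 2560 x 1600 panel requires dual-link DVI.
# Single-link DVI is capped at a 165 MHz TMDS pixel clock.

width, height, refresh_hz = 2560, 1600, 60
single_link_limit_hz = 165e6

# Active pixels only -- real timings add blanking, pushing this higher.
pixel_clock_hz = width * height * refresh_hz

print(f"{pixel_clock_hz / 1e6:.1f} MHz needed vs. 165 MHz single-link cap")
print("dual-link required:", pixel_clock_hz > single_link_limit_hz)
# -> 245.8 MHz needed vs. 165 MHz single-link cap
# -> dual-link required: True
```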

The card and its feature set are somewhat unimpressive considering its price tag, which means that in order to earn a recommendation, it will have to outperform ATI’s cards in the benchmarks and offer better value. There’s nothing inherently wrong with the FX 1700 – it simply doesn’t have a feature set that matches up to its price tag.

Comments
tierento 6 years ago

I was just curious if there is any advantage to using these cards over a high-end gaming card. Is there any chance you can post a couple of benchmarks of the latest Nvidia and ATI offerings?

beezlebub 6 years ago

Yes, please help us by making that comparison.

I have to do graphics work (rendering) for my job and I am trying to decide which way to go. I have to justify purchasing decisions to my pointy-headed bosses (plural, unfortunately), and if I say that I want to spend $600 on a card, they will have a tendency to say "But I saw this really cool card Y at Best Buy for $300". I can tell them that it would be great to play Warcraft on that card, but I need to do real processing. And they will say "What's the difference?" It would be great to show them a chart where a workstation card takes 1/10th of the time compared to the consumer card, for only twice the money.

chrisconnolly 6 years ago

This honestly depends on which application you'll be running. If you have an application that is optimized for workstation GPU hardware, with a profile at the driver level, it will likely run much faster on the workstation card. However, if it's not a supported application, it will likely run just as fast (or faster) on a less expensive gaming card.

Dave_HH 6 years ago

I'd like to just chime in here and say that it's great to see some new blood enter our registered user base.  Welcome to HH guys.  We appreciate your input and perspectives, positive, critical or otherwise.

Thanks!

clabrown 6 years ago

Nice review! My main concern is 2D and 3D CAD using Autodesk's AutoCAD 2008. Are any of the benchmarks you ran representative of that? Any possibility you could run a few that might apply to things like hidden-line removal in a 3D CAD drawing, rotation, or regen speed?

Thanks again.

chrisconnolly 6 years ago

The best place to find a CAD-like performance comparison would be the CATIA benchmarks in SPECviewperf 10 (in the review).


Crisis Causer 6 years ago

Throwing in an 8800 GT/S and a 3850 (or 3870) would be a very good idea. I can't think of any reason why not, unless you fear they would break for some strange reason. Why not add them?

smev 6 years ago

I'm quite surprised by the power consumption of these cards. My whole office is using 189 watts right now, and that's with 3 17" LCDs, 2 PCs, and my laptop powered on. Those video cards use just as much as my setup? That's crazy. I log all my power usage and spend no more than $9 per month. These cards would cost an extra $10 a month or so to leave on 24/7.

Kevin
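
For anyone wanting to redo Kevin's math against their own electric bill, the back-of-the-envelope calculation is simple. A minimal sketch (the 7 cents/kWh rate below is our assumption, not Kevin's actual rate, so substitute your own):

```python
# Rough monthly cost of leaving a constant electrical load on 24/7.
RATE_PER_KWH = 0.07  # dollars per kWh -- an assumed utility rate

def monthly_cost(watts, rate_per_kwh=RATE_PER_KWH):
    """Dollars to run a constant load of `watts` for a 30-day month."""
    kwh = watts / 1000 * 24 * 30  # kilowatt-hours consumed in the month
    return kwh * rate_per_kwh

print(f"${monthly_cost(189):.2f}")  # 189 W office -> roughly $9.50/month
```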

designmule 6 years ago

First off, the Studio Max benchmarks indicate that the tests were performed in OpenGL. Max has been optimized for DirectX for the last two or three versions; Nvidia even offers its Maxtreme drivers for D3D rather than OpenGL. You do not want to run Max in OpenGL if you don't have to.

Second: If you do 3D work in AutoCAD there may be some benefit to using workstation-grade video cards (due to optimized drivers, again with D3D being preferred), but if you do only 2D work there is no need to spend the extra money.

Lastly: Don't go and buy one of these things for games. Typically the workstation-grade cards are a generation behind the equivalent gaming cards they are based on (read: identical to). For example, the 8800 GT came out a few months ago; the Quadro based on the 8800 GT will be out in two or three months, but will cost 5 or 6 times what the 8800 costs and perform no better.

I run a team of 3D Max users, and there used to be a time when (for Max work) it was worthwhile to buy workstation-grade cards; however, this is no longer the case.

wang 6 years ago

I need to buy a video card for my PC, which is used for trading financial futures. I am considering the ATI FireMV 2250, Matrox P690, and Nvidia NVS 290. I only need 2D, and it has to drive dual 1920x1200 monitors. I understand that I should use a fast video card (don't know why), and I wonder whether I should consider a more expensive 3D card like the ATI FireGL V5600 instead of the 2D cards I am considering. Please enlighten. My rig is a quad-core 6600 OCed to 3.48 GHz, 2 GB of Crucial Ballistix at 4-4-4-12 and 850, and I'm currently using a Matrox P650 (PCIe x16) driving dual Dell 24-inch LCDs. The OS is 32-bit XP Pro.

tierento 6 years ago

Wang, you don't need a more expensive 3D card if you're running a normal application. FireGL cards and the like only help if you are attempting to run 3D games and scientific programs. How does your Matrox card run?

Does anyone here know if there are any optimized OpenGL drivers for the GeForce 8800 GTS range? I recently got one of these for my workstation and it runs slower in some OpenGL apps than my older ATI card. Or does Nvidia just suck when it comes to OpenGL?