Items tagged with GPGPU

For years, we've heard rumors that Intel was building custom chips for Google or Facebook, but those deals have always been assumed to involve standard hardware. Intel might offer a different product SKU with non-standard core counts, a specific TDP target, or a particular amount of cache -- but at the end of the... Read more...
The supercomputing conference SC13 kicks off this week, which means we'll see a great deal of news about initiatives and launches from all the major players in High Performance Computing (HPC). Nvidia is kicking off its own event with the launch of a new GPU and a strategic partnership... Read more...
One of the greatest problems standing between companies like AMD and widespread adoption of the GPU as a mainstream accelerator is that it's extremely difficult to effectively leverage the GPU. Even after years of development poured into CUDA, OpenCL, and yes, HSA, the barriers between CPU and GPU have remained... Read more...
To meet the needs of GP-GPU computing (General-Purpose computation on Graphics Processing Units) for environments where space is at a premium or a unit needs to be easily transportable, NextComputing has rolled out the Nucleus GP. Crammed into a mini-tower case (17.37” H x 16.75” W x 5.80” D), the “personal supercomputer”... Read more...
At its Fusion Developer Summit this week, AMD discussed the concepts and capabilities it's targeting for future generations of its graphics cards. The company didn't share any specific architectural features, but even the general information it handed out is interesting. Graphics CTO Eric Demers began by talking about the history of ATI's graphics card... Read more...
Larrabee, Intel's once-vaunted next-generation graphics card, died years ago, but the CPU technology behind the would-be graphics card has lived on. Intel discussed the future of MIC/Knights Corner today. After Larrabee was officially canceled, Intel repurposed the design and seeded development kits to appropriate... Read more...
The second day of the AMD Fusion Developer Summit began with a keynote from Microsoft’s Herb Sutter, Principal Architect of Native Languages and resident C++ guru. Sutter’s talk centered on heterogeneous computing and the changes coming in future versions of Visual Studio and C++. One of the... Read more...
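The teaser doesn't name the technology, but Sutter's keynote previewed the data-parallel C++ extensions Microsoft later shipped as C++ AMP in Visual Studio. A minimal sketch of that style of code, assuming the C++ AMP API as it eventually appeared (the add_arrays function and vector-add kernel are purely illustrative):

    #include <amp.h>
    #include <vector>

    // Vector add on the default accelerator (typically the GPU) using C++ AMP.
    void add_arrays(const std::vector<float>& a,
                    const std::vector<float>& b,
                    std::vector<float>& c)
    {
        using namespace concurrency;

        array_view<const float, 1> av(static_cast<int>(a.size()), a);
        array_view<const float, 1> bv(static_cast<int>(b.size()), b);
        array_view<float, 1>       cv(static_cast<int>(c.size()), c);
        cv.discard_data();  // don't copy c's initial contents to the accelerator

        // restrict(amp) marks the lambda as compilable for the accelerator.
        parallel_for_each(cv.extent, [=](index<1> i) restrict(amp) {
            cv[i] = av[i] + bv[i];
        });

        cv.synchronize();   // copy the results back into c on the host
    }

The array_view wrappers are the key design choice: the runtime moves data between CPU and GPU memory on demand, so the same source expresses the computation without explicit copies.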
AMD's GPU solutions have come a long way since the company acquired ATI. The combined company has competed very well against Nvidia for the past several years, at least at the consumer level. When it comes to HPC/GPGPU tools, however, Nvidia has had the market all to itself. Granted, the GPGPU market hasn't exactly exploded, but Nvidia... Read more...
Heather Mackey of Nvidia has written a new blog post discussing the company's hardware emulation equipment, thus affording us an opportunity to discuss a little-mentioned aspect of microprocessor development. Although we'll be discussing Nvidia products in particular, both software tools (i.e., simulation) and hardware emulation are vital... Read more...
Yesterday, at the Embedded Systems Conference, AMD announced a new embedded Radeon GPU, the E6760. Unlike its previous offerings in this segment, the E6760 is capable of driving up to six displays and supports OpenCL. "The AMD Radeon E6760 GPU provides customers with superior business economics through long lifecycle... Read more...
Much of the recent talk about AMD products has centered on Bulldozer, but Llano is on track for launch this year as well. AMD has released a new video pitting Llano against Intel's Sandy Bridge, with results that (un)surprisingly favor AMD's own solution. According to Godfrey Cheng, AMD's director of Client Technology, Llano was designed... Read more...
New CUDA 4.0 Release Makes Parallel Programming Easier: Unified Virtual Addressing, GPU-to-GPU Communication, and Enhanced C++ Template Libraries Enable More Developers to Take Advantage of GPU Computing. SANTA CLARA, Calif. -- Feb. 28, 2011 -- NVIDIA today announced the latest version of the NVIDIA CUDA Toolkit for developing parallel applications... Read more...
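The headline features are Unified Virtual Addressing and direct GPU-to-GPU copies. A minimal host-side sketch of how the peer-to-peer calls introduced in CUDA 4.0 are typically used; the device indices (0 and 1) and buffer size here are illustrative, not taken from the announcement:

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        if (count < 2) {
            std::printf("Peer-to-peer copies need at least two GPUs.\n");
            return 0;
        }

        // Check whether GPU 0 can directly access GPU 1's memory.
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);

        // Allocate one buffer on each device.
        const size_t bytes = 1 << 20;
        float *buf0 = nullptr, *buf1 = nullptr;
        cudaSetDevice(0);
        cudaMalloc(&buf0, bytes);
        if (canAccess) {
            // Map GPU 1's memory into GPU 0's address space.
            cudaDeviceEnablePeerAccess(1, 0);
        }
        cudaSetDevice(1);
        cudaMalloc(&buf1, bytes);

        // With Unified Virtual Addressing the runtime knows which device owns
        // each pointer; cudaMemcpyPeer copies GPU-to-GPU without the application
        // staging the data through host memory itself.
        cudaMemcpyPeer(buf0, 0, buf1, 1, bytes);

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }
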
Six months ago, we covered a story in which Nvidia's chief scientist, Bill Dally, made a number of sweeping claims regarding the superiority of GPUs. Six months later, he's again attacking traditional microprocessors with another series of sweeping accusations. As before, in our opinion, he paints with far too broad a brush. Dally's basic claim is that... Read more...
If you're a fan of GPGPU computing this is turning out to be an interesting week. At SC10 in New Orleans, Intel has been demoing and discussing its Knights Ferry development platform. Knights Ferry, which Intel refers to as a MIC (Many Integrated Core) platform, is the phoenix rising from the ashes of Larrabee... Read more...
For the past 3.5 years or so, NVIDIA has ardently advocated the GPU as a computational platform capable of solving almost any problem. One topic the company hasn't targeted, however, is the tremendous performance advantage the GPU could offer malware authors. The idea that a graphics card could double as a security hole isn't something we've... Read more...
At the time of this writing, the FTC's investigation into Intel's alleged monopolistic abuses is on hold as the government attempts to negotiate a settlement with the CPU and chipset manufacturer. If these negotiations don't result in a deal by July 22, the case returns to court, with arguments currently scheduled to begin on September 15.... Read more...
Earlier this week, we covered news that a California PS3 owner, Anthony Ventura, had filed a class action lawsuit against Sony, alleging that the company's decision to terminate the PS3's Linux support via firmware update constituted a false/deceptive marketing practice. While most PS3 owners never took advantage of the system's Linux capabilities,... Read more...
Bill Dally, chief scientist at NVIDIA, has written an article at Forbes alleging that traditional CPU scaling and Moore's Law are dead, and that parallel computing is the only way to maintain historic performance scaling. With six-core processors now available for $300, Dally's remarks are certainly timely, but his conclusions are a bit... Read more...
When Intel announced its plans to develop a discrete graphics card capable of scaling from the consumer market to high-end GPGPU calculations, it was met with a mixture of scorn, disbelief, interest, and curiosity. Unlike the GPUs on the market when it was detailed at SIGGRAPH in 2008 (or any of the current ones, for that matter), Larrabee was a series of in-order x86... Read more...
When it comes to hardware-accelerated PhysX and the future of GPGPU computing, AMD and NVIDIA are the modern-day descendants of the Hatfields and McCoys. Both companies attended GDC last week, where a completely predictable war broke out over PhysX, physics, developer payoffs, and gamer interest in PhysX (or the lack... Read more...
Back in late September of last year, NVIDIA disclosed some information regarding its next generation GPU architecture, codenamed "Fermi". At the time, actual product names and detailed specifications were not disclosed, nor was performance in 3D games, but high-level information about the architecture, its strong focus on compute performance,... Read more...