Tag: GPU computing
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/FirePro_w9000_small.png" alt="" width="115" height="96" />Advanced Micro Devices (AMD) has launched six new FirePro processors for workstation users who want high-end graphics and computation in a single box. One of them promises a teraflop of double precision performance as well as support for error correcting code (ECC) memory. The new offerings also include two APUs (Accelerated Processing Units) that glue four CPU cores and hundreds of FirePro GPU stream cores onto the same chip.
Scientists use latest Cray supercomputer to figure out how to make better ice cream.
With the right software, GPUs can speed chip design.
SoftLayer joins ranks of HPC cloud vendors.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/Samaritan_demo_image_small.bmp" alt="" width="102" height="97" />NVIDIA debuted its much-talked-about Kepler GPU this week, promising much better performance and energy efficiency than its previous generation Fermi-based products. The first offerings are mid-range graphics cards targeted at the heart of the desktop and notebook market, but the more powerful second-generation Kepler GPU for high performance computing is already in the pipeline.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/OpenCL_logo.png" alt="" width="80" height="76" />As the two major programming frameworks for GPU computing, OpenCL and CUDA have been competing for mindshare in the developer community for the past few years. Until recently, CUDA has attracted most of the attention from developers, especially in the high performance computing realm. But OpenCL software has now matured to the point where HPC practitioners are taking a second look.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/digital_time_tunnel_small.jpg" alt="" width="118" height="95" />With 2011 officially in the books, it’s time to offer a few predictions about the upcoming year in HPC. In general, I expect 2012 to continue the major trends we’ve seen over the past couple of years, namely the increased adoption of GPU computing into the mainstream and more parity of HPC capability around the world, as exemplified by China. There may, however, be one or two new trends to pop up.
The data deluge in the life sciences is nowhere more acute than at Chinese genomics powerhouse BGI, which probably sequences more DNA than any other organization in the world. To turn that data into something meaningful for genomic researchers, the institute has begun to employ GPU-accelerated HPC to greatly reduce processing times. In doing so, BGI was able to increase computational throughput by an order of magnitude or more.
Lost in the flotilla of vendor news at the Supercomputing Conference (SC11) in Seattle last month was the announcement of a new directives-based parallel programming standard for accelerators. Called OpenACC, the open standard is intended to bring GPU computing into the realm of the average programmer, while making the resulting code portable across other accelerators and even multicore CPUs.
There’s more than one way to build energy efficient supercomputers.