Tag: GPU computing
With the right software, GPUs can speed chip design.
SoftLayer joins ranks of HPC cloud vendors.
NVIDIA debuted its much-talked-about Kepler GPU this week, promising much better performance and energy efficiency than its previous-generation Fermi-based products. The first offerings are mid-range graphics cards targeted at the heart of the desktop and notebook market, but the more powerful second-generation Kepler GPU for high performance computing is already in the pipeline.
As the two major programming frameworks for GPU computing, OpenCL and CUDA have been competing for mindshare in the developer community for the past few years. Until recently, CUDA has attracted most of the attention from developers, especially in the high performance computing realm. But OpenCL software has now matured to the point where HPC practitioners are taking a second look.
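For readers who haven't looked at OpenCL since its early days, the sketch below shows roughly what a minimal vector-add program looks like against the standard C API. The kernel name, problem size, and choice of the first available platform and device are illustrative only, and error checking is omitted for brevity:

/* Minimal OpenCL vector add -- a sketch, not production code.
   Build (Linux): gcc vadd.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

#define N 1024

/* The device kernel, written in OpenCL C and compiled at runtime. */
static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    /* Pick the first available platform and device. */
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Build the kernel from source at runtime. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    /* Copy inputs to the device; allocate the output buffer. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    /* Launch one work-item per element, then read the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %g\n", c[10]);  /* expect 30 */
    return 0;
}

The host-side boilerplate is the usual knock against OpenCL relative to CUDA's more compact runtime API, but the kernel, compiled from source at runtime, is what makes the code portable across vendors.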
With 2011 officially in the books, it’s time to offer a few predictions about the upcoming year in HPC. In general, I expect 2012 to continue the major trends we’ve seen over the past couple of years, namely the increased adoption of GPU computing into the mainstream and greater parity of HPC capability around the world, as exemplified by China. One or two new trends may pop up as well.
The data deluge in the life sciences is nowhere more acute than at Chinese genomics powerhouse BGI, which probably sequences more DNA than any other organization in the world. To turn that data into something meaningful for genomic researchers, the institute has begun to employ GPU-accelerated HPC to greatly reduce processing times. In doing so, BGI was able to increase computational throughput by an order of magnitude or more.
Lost in the flotilla of vendor news at the Supercomputing Conference (SC11) in Seattle last month was the announcement of a new directives-based parallel programming standard for accelerators. Called OpenACC, the open standard is intended to bring GPU computing into the realm of the average programmer, while making the resulting code portable across other accelerators and even multicore CPUs.
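As a rough illustration of the directives-based approach, the sketch below annotates an ordinary C loop with a single OpenACC pragma. The program itself is invented for illustration, but the directive and data clauses follow the OpenACC 1.0 specification announced at SC11. Built with an OpenACC-capable compiler (for example, PGI's pgcc -acc), the loop is offloaded to the accelerator; built with any other compiler, the pragma is simply ignored and the loop runs on the CPU, which is precisely the portability story the standard is selling.

/* A saxpy-style loop offloaded with a single OpenACC directive.
   Build with an OpenACC compiler, e.g. pgcc -acc saxpy.c;
   without -acc the pragma is ignored and the code runs serially. */
#include <stdio.h>

#define N (1 << 20)

static float x[N], y[N];

int main(void)
{
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The directive asks the compiler to parallelize the loop on an
       accelerator: x is copied in, y is copied in and back out.
       No CUDA or OpenCL code is written by hand. */
    #pragma acc parallel loop copyin(x) copy(y)
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %g\n", y[0]);  /* expect 4 */
    return 0;
}

The appeal to the average programmer is that the parallelism lives in a hint to the compiler rather than in a separate device-code dialect, much as OpenMP did for multicore CPUs.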
There’s more than one way to build energy efficient supercomputers.
Jaguar’s days as a CPU-only supercomputer are numbered. Over the next year, the 2.3 petaflop machine at the Oak Ridge National Lab will be upgraded by Cray with the new NVIDIA “Kepler” GPUs, producing a system with about 10 times Jaguar’s peak performance. The transformed supercomputer will be renamed Titan and should deliver in the neighborhood of 20 peak petaflops sometime in late 2012.
Academic consortium buys GPU-equipped SGI cluster.