The past decade has seen a sharp rise in heterogeneous computing: processing or coprocessing using more than one type of processor. One of the most prominent examples of heterogeneous elements in HPC is the GPU computing ecosystem that has been fostered by NVIDIA and AMD. General-purpose GPU (GPGPU) adoption has become widespread in HPC, and student supercomputing…
When it comes to employing physics in medicine, two fields stand out for their relevance in clinical practice: medical imaging and radiation therapy. An Argentinian research duo addresses how these domains can benefit from high-performance computing techniques…
What do the Atari 2600 and Tianhe-1A have in common? It may be difficult to imagine, but both systems are examples of the use of cutting-edge graphics processors for their times. This demonstrates the fascinating evolution of the GPU, which today is one of the most critical hardware components of supercomputer architectures.
HPC programmers who are tired of managing low-level details when using OpenCL or CUDA to write general purpose applications for GPUs (GPGPU) may be interested in Harlan, a new declarative programming language designed to mask the complexity and eliminate errors common in GPGPU application development.
<img src="http://media2.hpcwire.com/hpcwire/Penguin_Computing_logo_172x.jpg" alt="" width="101" height="59" />Penguin Computing continues to see growing demand for servers that go heavy on the GPUs (or other coprocessors). Based on feedback from one such customer, it has designed the Relion 2808GT server, which it says now has the highest compute density of any server on the market.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/research_globe_150x.jpg" alt="" width="95" height="89" />The top research stories of the week have been hand-selected from major science centers, prominent journals and leading conference proceedings. Here's another diverse set of items, including whole brain simulation; a look at High Performance Linpack; the coming GPGPU cloud paradigm; heterogeneous GPU programming; and a comparison of accelerator-based servers.
Anyone for a 5-teraflop personal computer?
Lost in the flotilla of vendor news at the Supercomputing Conference (SC11) in Seattle last month was the announcement of a new directives-based parallel programming standard for accelerators. Called OpenACC, the open standard is intended to bring GPU computing into the realm of the average programmer, while making the resulting code portable across other accelerators and even multicore CPUs.
The Portland Group’s directives-based approach to programming accelerators.
As we approach the fourth anniversary of the release of NVIDIA CUDA, arguably ground zero of the GPGPU movement, there are many who have flirted with, piloted and adopted the technology, but many more who are sitting on the sidelines for various reasons. In our work, we have come across many of the latter, and have thus compiled a list of the most common questions, concerns and assertions that preempt efforts to evaluate the technology.