Anyone for a 5-teraflop personal computer?
Lost in the flotilla of vendor news at the Supercomputing Conference (SC11) in Seattle last month was the announcement of a new directives-based parallel programming standard for accelerators. Called OpenACC, the open standard is intended to bring GPU computing into the realm of the average programmer, while making the resulting code portable across other accelerators and even multicore CPUs.
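To give a flavor of what "directives-based" means in practice, here is a minimal sketch of an OpenACC-annotated loop (a generic SAXPY kernel, not taken from the standard itself). With an OpenACC-capable compiler the pragma offloads the loop to an accelerator; any other compiler simply ignores the directive and runs the loop serially, which is the portability the standard is aiming for.

```c
#include <stddef.h>

/* SAXPY (y = a*x + y) with an OpenACC directive.
   An OpenACC compiler offloads the loop and manages the data
   transfers named in the copyin/copy clauses; a non-OpenACC
   compiler ignores the pragma and runs the loop on the CPU. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

The appeal for the "average programmer" is that the serial source stays intact: the directive is an annotation layered on ordinary C, rather than a rewrite into a device-specific kernel language.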
The Portland Group’s directives-based approach to programming accelerators.
As we approach the fourth anniversary of the release of NVIDIA CUDA, arguably ground zero of the GPGPU movement, there are many who have flirted with, piloted, and adopted the technology, but many more who are sitting on the sidelines for various reasons. In our work, we have come across many of the latter, and have compiled a list of the most common questions, concerns and assertions that preempt efforts to evaluate the technology.
Science code hits 1.87 petaflops on top-ranked Tianhe-1A.
Spring issue of EPCC News shines spotlight on GPUs in high performance computing.
AMD execs answer tough questions about tying the future of AMD to GPGPU movement.
AMD pitches FirePro V7800P against NVIDIA’s Tesla M2070Q.
The adoption curve for GPU computing is being slowed by programming and ISV challenges, according to NVIDIA's Chief Solution Architect.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the Cray/Sandia partnership to found a knowledge institute; RenderStream's FireStream-based workstations and servers; NVIDIA's latest CUDA centers; Reservoir Labs and Intel's extreme scale ambitions; and Jülich Supercomputing Centre's new hybrid cluster.