As the non-profit standards group behind the push for wider adoption of accelerators through easier use, OpenACC has quite a big job ahead. Although analysts agree that accelerators sit on a comfortable adoption curve, usability, programmability and portability remain key concerns, among others. Over the last couple of years, OpenACC has worked with user groups…
Moments ago, NVIDIA announced its acquisition of the Portland Group (PGI), which has provided compilers and tools for the HPC-oriented C and Fortran markets. According to the company’s Sumit Gupta, the acquisition will allow NVIDIA to further build out its software portfolio and to push the adoption of GPUs, through OpenACC in particular. NVIDIA and PGI will…
PGI, Cray, and CAPS enterprise are moving quickly to get their new OpenACC-supported compilers into the hands of GPGPU developers. At NVIDIA’s GPU Technology Conference this week, there was plenty of discussion around the new HPC accelerator framework, and all three OpenACC compiler makers, as well as NVIDIA, were talking up the technology.
GPU maker NVIDIA is going to make its CUDA compiler source code and internal representation format public, opening up the technology to different programming languages and processor architectures. The announcement was made on Wednesday at the kick-off of the GPU Technology Conference Asia in Beijing, China.
Lost in the flotilla of vendor news at the Supercomputing Conference (SC11) in Seattle last month was the announcement of a new directives-based parallel programming standard for accelerators. Called OpenACC, the open standard is intended to bring GPU computing into the realm of the average programmer, while making the resulting code portable across other accelerators and even multicore CPUs.
The Portland Group’s directives-based approach to programming accelerators.
CUDA versus OpenMP for GPUs. What’s a developer to do?
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Cray’s first XMT-2 supercomputer order, University of Delaware researchers’ extreme-scale architecture breakthrough, AMD’s OpenCL University Kit, Platform’s Grid Engine migration program, and PGI’s 2011 product refresh.
NVIDIA’s CUDA is easily the most popular programming language for general-purpose GPU computing. But one of the more interesting developments in the CUDA-verse doesn’t really involve GPUs at all. In September, HPC compiler vendor PGI (The Portland Group Inc.) announced its intent to build a CUDA compiler for x86 platforms. The technology will be demonstrated for the first time in public at SC10 this week in New Orleans.
In May, Intel announced the Many Integrated Core (MIC) architecture, with a development kit codenamed Knights Ferry. NVIDIA has announced and started to deliver its next-generation architecture, Fermi. PGI’s Michael Wolfe presents an in-depth comparison of the two designs.