June 19, 2012
HAMBURG, Germany, June 19 -- The Portland Group, a wholly-owned subsidiary of STMicroelectronics and a leading supplier of compilers and tools for high-performance computing (HPC), announced immediate availability of its PGI Accelerator Fortran and C compilers with support for the new OpenACC 1.0 specification, a directive-based standard that lets programmers provide hints to the compiler for offloading code to GPUs and other accelerators.
“PGI continues to make accelerator programming more straightforward and productive for domain experts in science and engineering,” said Douglas Miles, Director of The Portland Group. “The OpenACC standard dovetails perfectly into our existing compiler products as a subset of the PGI Accelerator programming model. The PGI Accelerator compilers for Fortran 2003 and C include support for both high-level directive-based x64+GPU programming and explicit GPU programming using CUDA C/C++/Fortran language extensions.”
The OpenACC standard, unveiled at the SC11 conference in November 2011, was founded by PGI, Cray, and NVIDIA, with support from CAPS, and is fully compatible and interoperable with the NVIDIA® CUDA® parallel programming architecture. The OpenACC 1.0 specification was developed cooperatively by the founding members and is based in large part on the PGI Accelerator programming model. The OpenACC Application Program Interface (API) describes a collection of compiler directives that specify loops and regions of code in standard C, C++ and Fortran to be offloaded from a host CPU to an attached accelerator, providing portability across operating systems, host CPUs and accelerators. By using directives, developers can maintain a single code base that is multi-platform and multi-vendor compatible, a key advantage for cross-platform and multi-generation application development.
Using OpenACC, parallel programmers can offload code from a host CPU to an attached accelerator by using hints, known as directives, to identify areas of code suitable for acceleration. In addition to exposing parallelization opportunities to the compiler, directives can also specify how to efficiently map loops to a particular accelerator and how to optimize data movement. Directives free the developer to focus on algorithms and application functionality while the compiler does the detailed work of offloading computations onto an accelerator. The principal benefit of directives is significant improvement in application performance without requiring restructuring of the underlying source code.
PGI Accelerator Compilers with OpenACC Support
First announced in 2008, the PGI Accelerator compilers augment the OpenACC standard in several areas to provide enhanced functionality and flexibility, including:
· Auto-generation of optimized loop schedules.
· Automatic use of shared memory.
· Automatic sum reductions.
· Interoperability with CUDA Fortran and CUDA C/C++.
· PGI Unified Binary™ technology executable files that work in the presence or absence of an accelerator.
In addition, PGI Accelerator compilers include PGI’s complete suite of x86 host-performance optimization technologies including automatic SIMD vectorization, auto-parallelization, interprocedural analysis, function inlining and more.
Currently, PGI Accelerator Fortran and C99 compilers support x64+NVIDIA systems running under Linux, OS X and Windows; the compilers are supported on all Intel and AMD x64 processor-based systems with CUDA-enabled NVIDIA GPUs. OpenACC support will be included in PGI Release 2012 version 12.6 and later. It is available free of charge to PGI Accelerator licensees with a current PGI subscription. A free trial version is available from the PGI website at www.pgroup.com/support/trial.htm. More information on the PGI Accelerator compilers with OpenACC is available at http://www.pgroup.com/accelerate. More information on the OpenACC API and standard can be found at www.openacc.org.
About The Portland Group (PGI)
The Portland Group, a wholly-owned subsidiary of STMicroelectronics (NYSE: STM), is the premier supplier of high-performance parallel Fortran, C, and C++ compilers and tools for workstations, servers, and clusters based on x64 processors from Intel and AMD, and GPU accelerators from NVIDIA. Further information on The Portland Group products can be obtained at www.pgroup.com, by calling Sales at (503) 682-2806, or by email to firstname.lastname@example.org.
Source: The Portland Group