OpenACC – the directives-based parallel programming model for optimizing applications on heterogeneous architectures – is showcasing user traction and HPC impact at SC18. Most noteworthy is that five of 13 CAAR applications optimized for the Summit supercomputer used OpenACC to accelerate performance. The CAAR (Center for Accelerated Application Readiness) program at Oak Ridge National Laboratory was established to prepare applications to take advantage of Summit, now the fastest supercomputer in the world.
OpenACC also introduced a new release (2.7) of the specification; reported roughly 130,000 downloads of PGI's free OpenACC community edition compiler; and said more than 150 HPC applications, including five major commercial applications, have now been accelerated with OpenACC. Taken together, the milestones reported by OpenACC suggest the standard is gaining momentum.
You may remember OpenACC was developed by Cray, Nvidia, CAPS Enterprise, and PGI; the first specification (1.0) was delivered at SC11 and targeted GPUs. It followed OpenMP (1997), which at the time performed a similar function but focused on host x86 CPUs. The growing use of parallel programming tools such as OpenACC and OpenMP has tracked the steady rise of accelerator-based heterogeneous computing in recent years. Many wonder whether and when OpenACC and OpenMP will merge into a single specification; this was an often-stated goal at OpenACC's start and has been discussed ever since. (See the HPCwire article, NVIDIA Eyes Post-CUDA Era of GPU Computing.) The slide below summarizes OpenACC's SC18 announcements.
Summit is a good example of the emerging CPU/GPU hybrid compute paradigm, in which the CPU acts largely as a supervisor and the speed-up derives from parallel processing on GPUs. Each Summit node, for example, consists of two IBM Power9 CPUs and six Nvidia V100 GPUs.
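For readers unfamiliar with the model, the directive-based approach looks like the minimal sketch below: ordinary C with a pragma annotating the parallel loop. This is an illustrative SAXPY kernel, not code from any CAAR application; with an OpenACC compiler the loop is offloaded to the GPU, while compilers without OpenACC support simply ignore the pragma and run the loop serially, producing the same result.

```c
/* SAXPY (y = a*x + y), the canonical accelerator example. With an
 * OpenACC compiler (e.g. pgcc -acc) the pragma offloads the loop to
 * the GPU, and the data clauses move x to the device and copy y both
 * ways. A non-OpenACC compiler ignores the unknown pragma, so the
 * same source runs serially on the CPU with identical results. */
void saxpy(int n, float a, const float *restrict x, float *restrict y) {
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

The appeal, and the reason it surfaces repeatedly in the CAAR testimonials below, is that the loop body is untouched: the directive is a hint layered on top of portable code rather than a rewrite into a device-specific language.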
OpenACC is understandably excited by its success with the CAAR applications. Parallelizing and scaling these codes to work on such a large, novel system is a challenge. In its official SC18 announcement, OpenACC offered testimonials from leaders of the CAAR efforts which used OpenACC:
- Energy Exascale Earth System Model (E3SM) used for high-resolution simulation of the global coupled climate system. “The CAAR project provided us with early access to Summit hardware and access to PGI compiler experts. Both of these were critical to our success. PGI’s OpenACC support remains the best available and is competitive with much more intrusive programming model approaches.” – Mark A. Taylor, Multiphysics Applications, Sandia National Laboratories
- LSDalton used in quantum chemistry. “Using OpenACC, we see large performance gains with very little effort. GPU acceleration varies over the course of our simulations due to differing fragment sizes, but is typically 3x–5x. On Summit we can now do simulations of several thousand atoms, compared to maybe 800 on Titan.” – Dmytro Bykov, Computational Scientist, Oak Ridge National Laboratory
- FLASH used for astrophysics. “We’re using OpenACC on Summit to accelerate our most compute-intensive kernels. We love OpenACC interoperability and how this allows us to use multiple methods to perform memory placement and movement. CPU+GPU performance of a 288 species network on Summit, something impossible to do on Titan, is 2.9x faster than CPU only.” – Bronson Messer, Senior Scientist, Oak Ridge National Laboratory
- GTC used for particle turbulence simulations for sustainable fusion reactions in ITER. “Using OpenACC, our scientists were able to achieve the acceleration needed for integrated fusion simulation with a minimum investment of time and effort in learning to program GPUs.” – Zhihong Lin, Professor and Principal Investigator, UC Irvine
- XGC code for enabling multiphysics magnetic fusion reactor simulator. “Using a combination of CUDA and OpenACC for our most compute-intensive kernels, the GPU-accelerated version of XGC delivers over 11x speed-ups compared to CPU-only execution when running at scale on 2048 nodes of ORNL’s new Summit supercomputer.” – C-S Chang, Principal Investigator, Princeton Plasma Physics Lab, Princeton University.
Half in jest, Sunita Chandrasekaran, director of user adoption for OpenACC, told HPCwire the organization is now looking to be part of an effort that wins a Gordon Bell Prize. As it happens, one of this year’s finalists (a University of Tokyo earthquake simulation) initially used OpenACC in its submission but switched to CUDA for the final submission to extract maximum performance.
It’s probably fair to say the announced 2.7 release is incremental. It adds a ‘self’ clause on compute constructs, enabling a program to use both the multicore host and an accelerator without dynamically changing the current device. Other changes include a ‘readonly’ modifier for the ‘copyin’ data clause and the ‘cache’ directive, and arrays and composite variables (C/C++ structs or classes and Fortran derived types) are now allowed in reductions.
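The two headline 2.7 additions can be sketched together in a few lines. This is an illustrative example, not taken from the specification: the size threshold, function name, and parameters are invented, and it requires a 2.7-aware compiler to have any effect (other compilers ignore the pragma and run the loop serially, with the same numerical result).

```c
/* Sketch of two OpenACC 2.7 additions (hypothetical function and
 * threshold). The 'self' clause keeps the compute region on the
 * local host device -- e.g. the multicore CPU -- when its condition
 * is true, without dynamically switching the current device; when
 * false, the region runs on the accelerator as before. The new
 * 'readonly' modifier on 'copyin' promises the compiler that x is
 * never written inside the region. */
void scale(int n, float a, const float *restrict x, float *restrict y) {
    /* small problems stay on the host; large ones go to the GPU */
    #pragma acc parallel loop self(n < 4096) \
                copyin(readonly: x[0:n]) copyout(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i];
}
```

The design intent is that one binary can serve both small and large problem sizes without the application juggling acc_set_device_type() calls around each kernel.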
Looking ahead, Chandrasekaran said, “We have been talking about deep copy [and] there is interest from application developers for that.” A beta deep copy feature is being worked on and is available in the newly released PGI 18.10 Community Edition. Chandrasekaran declined to discuss other planned features.
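To see why developers are asking for deep copy, consider an aggregate type with a pointer member. The sketch below (the Mesh type and function names are hypothetical, not from PGI's beta) shows the manual pattern that automatic deep copy is meant to replace: a plain copyin of the struct would move only the struct itself, leaving its interior pointer aimed at host memory, so today the nested allocation must be copied and attached by hand with unstructured data directives.

```c
/* A struct whose 'data' member is a separate allocation. Copying
 * the struct to the GPU shallowly would leave 'data' pointing at
 * host memory; the payload must be moved and attached explicitly. */
typedef struct {
    int n;          /* number of elements in data */
    double *data;   /* dynamically allocated payload */
} Mesh;

void mesh_to_device(Mesh *m) {
    #pragma acc enter data copyin(m[0:1])           /* shallow struct copy */
    #pragma acc enter data copyin(m->data[0:m->n])  /* payload; attaches pointer */
}

void mesh_from_device(Mesh *m) {
    #pragma acc exit data copyout(m->data[0:m->n])  /* bring payload back */
    #pragma acc exit data delete(m[0:1])            /* free device struct */
}
```

With a non-OpenACC compiler both functions compile to no-ops, so host-only builds still work; an automatic deep copy feature would collapse each pair of directives into a single one that walks the structure.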
SUSE, C-DAC and Osaka University are the three most recent institutions to join the OpenACC membership. OpenACC says the new members will “contribute to technical and marketing committees, shape the OpenACC specification to support their research and will help grow a community of OpenACC users who aim to perform more science and research and less programming effort.”
OpenACC continues to ramp up its outreach efforts. “GPU Hackathons which started as OpenACC-only events under Oak Ridge National Laboratory umbrella have now grown into a series of events with 160 teams participating from all around the world. A majority of the teams choose OpenACC to start programming GPU, but any GPU programming models and tools are welcome at the events,” according to the release.