Perhaps the most significant ISC19 news for OpenACC wasn’t in its official press release yesterday, which touted growing user traction and the notable addition of HPC leader Jack Wells, director of science at the Oak Ridge Leadership Computing Facility, to a newly created VP position. Instead, Nvidia’s announcement today that it would support Arm is the bigger news, opening up a whole new ecosystem of potential OpenACC users. OpenACC, of course, is the directives-based parallel programming model for optimizing applications on heterogeneous architectures.
Questions about Arm’s lack of a clear accelerator strategy have long percolated in HPC and the broader server community. Obtaining support from Nvidia, the industry’s GPU leader, is a big step forward. Several high-end Arm systems have been stood up or announced in recent years, and presumably OpenACC support for Arm isn’t far behind. Notably, Nvidia has had intermittent Arm efforts in the past (see HPCwire ISC19 article Nvidia Embraces Arm, Declares Intent to Accelerate All CPU Architectures).
OpenACC wouldn’t comment on forthcoming Arm support in a pre-ISC briefing with HPCwire (now we know why), but compiler support for Arm should emerge quickly. OpenACC did announce that its 2019 annual meeting (September) will be hosted at the RIKEN Center for Computational Science – a move that makes more sense now given that RIKEN is building Fugaku, Japan’s next flagship supercomputer, an Arm-based system that doesn’t use accelerators.
Wells is quoted in the official release, “OpenACC is a very user focused organization where the community’s needs come first. With an ever-changing landscape of CPUs, GPUs, and other accelerators, scientists are constantly adopting and learning new programming models to continually advance their scientific endeavors…I’ve seen how valuable the OpenACC organization’s contributions are to enabling users to achieve better and faster scientific results, regardless of the computing architecture used.”
Also quoted is Mitsuhisa Sato, Deputy Director, RIKEN CCS, “Although our next flagship supercomputer ‘Fugaku’ will not have any accelerators, we believe that the OpenACC programming model will be important for our center’s research to explore the future direction of programming models to increase performance and portability. We are honored to host the OpenACC Annual Meeting this September at RIKEN R-CCS, and welcome attendees from Europe, North America, Japan, India, Korea, and other regions around the world.”
During the HPCwire briefing Wells said, “The focus of going to Japan is to engage our Japanese colleagues and getting them to be more involved in OpenACC in every way possible. There are many GPU-accelerated and ASIC-accelerated machines in Japan. At Oak Ridge, we look at the RIKEN machine in the context of accelerated node systems, the scalable vector extensions on RIKEN in some sense are similar to GPU programming. It’s not heterogeneous but there is a lot of [similar] features and commonality. I think we have a lot to learn from each other.”
That’s certainly true now. Wells also referenced the big Arm-based system (Astra, being built by HPE) planned at Sandia National Labs.
Quite aside from the Arm news, OpenACC has enjoyed steadily growing use in HPC in recent years, roughly tracking the rise of GPU-accelerated heterogeneous compute architectures. The Summit supercomputer at OLCF is a good high-end example. Five of the 13 applications prepped by CAAR (Center for Accelerated Application Readiness) for Summit, the fastest supercomputer in the world, used OpenACC (see HPCwire article, OpenACC Talks Up Summit and Community Momentum at SC18).
At SC18 last November, OpenACC reported that 150 important HPC applications had been accelerated with OpenACC. At ISC it reported that more than 200 applications have now been accelerated. OpenACC membership has also grown somewhat. NERSC, for example, joined in April, at least in part to prepare for its Perlmutter project, which will use AMD Epyc CPUs and Nvidia GPUs. The OpenACC Slack community has mushroomed to around 1,200 members, and there have been 180,000 downloads of PGI’s free community compiler.
Bringing Wells onboard in a community development role is more evidence of OpenACC’s growing stature in HPC.
He told HPCwire, “I’m interested in expanding all [areas of] OpenACC adoption. We have a major long term research interest on accelerated node architecture, now at GPUs, and in the future other things like it, FPGAs or ASICs, and we need flexibility to maintain and extend our codes over time. OpenACC is one important option.”
Currently, the OpenACC technical committee is voting on several features that have been in the works. One planned new feature, deep copy, has been in beta with PGI. Duncan Poole, president of OpenACC, said the next major release will come late in the year. The current release is 2.7. “There will be a new release of the spec at SC19 time frame (November). The number is driven by how many key features [we include], so whether it’s 2.8 or 3.0 will be your clue as to how much we have managed to accomplish,” said Poole.
Over the years there has been discussion around OpenMP and OpenACC and whether or not the two ‘competing’ specs will or should merge. Poole said, “When I look across the technical committee many of us are members of other organizations. OpenMP has a much broader collection of objectives. Ours is fairly narrow. For now, [we] have a cordial relationship where we are kind of keeping each other on our toes.”
Link to OpenACC release: https://www.openacc.org/news/openacc-empowers-user-community