OpenACC Starts to Gather Developer Mindshare

By Michael Feldman

May 17, 2012

PGI, Cray, and CAPS enterprise are moving quickly to get their new OpenACC-supported compilers into the hands of GPGPU developers. At NVIDIA’s GPU Technology Conference (GTC) this week, there was plenty of discussion around the new HPC accelerator framework, and all three OpenACC compiler makers, as well as NVIDIA, were talking up the technology.

Announced at the Supercomputing Conference (SC11) last November, OpenACC is an open standard API developed by NVIDIA, PGI, Cray, and CAPS to provide a high-level framework for programming accelerators like GPUs. OpenACC uses compiler directives, which programmers insert into high-level source code (e.g., C, C++, or Fortran), to tell the compiler to execute specific pieces of the code on the accelerator hardware.
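To give a flavor of the approach, here is a minimal sketch (not taken from the standard or from any vendor’s documentation) of what an OpenACC-annotated loop looks like in C. The pragma asks the compiler to offload the loop to an attached accelerator and manage the data movement; the surrounding code is ordinary C.

```c
/* SAXPY (y = a*x + y), offloaded with a single OpenACC directive.
 * Built with an OpenACC-aware compiler, e.g. "pgcc -acc" (the flag is compiler-specific). */
void saxpy(int n, float a, const float *x, float *y)
{
    /* copyin: send x to the accelerator; copy: send y over and bring the result back */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

Without the pragma, the function compiles and runs as plain serial C, which is part of the portability appeal discussed below.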

GTC conference-goers had plenty of opportunity to encounter OpenACC this week. There were two OpenACC tutorials for would-be developers, one by NVIDIA and the other by CAPS enterprise. In addition, there were four other sessions hosted by Cray, CAPS, and PGI throughout the week. That’s not counting the numerous mentions OpenACC got during other presentations involving GPGPU programming.

The technology is still in its infancy, though. The PGI and Cray compilers are pre-production versions, and CAPS’ first commercial offering is just two weeks old.

The initial goal of OpenACC is to bring more developers (and codes) into GPU computing, especially those not being served by lower-level programming frameworks like CUDA and OpenCL. While CUDA is widely used in universities and in the technical computing realm, and OpenCL is emerging as an open standard for parallel computing, neither is particularly attractive to commercial developers.

Most programmers are used to writing high-level code that focuses on the problem at hand, without having to worry about the vagaries of the underlying hardware. That hardware independence is also what makes OpenACC attractive for codes that need to span different processor architectures.

That assumes, of course, that the compilers will support multiple accelerator chips. The first crop of OpenACC-enabled compilers from PGI, CAPS, and Cray only generates code for NVIDIA GPUs — not too surprising when you consider NVIDIA’s current dominance in HPC acceleration. However, all of the compiler efforts plan to widen the aperture of hardware support.

CAPS is perhaps the most aggressive in this regard. According to CAPS CTO François Bodin, his company plans to add OpenACC support for AMD GPUs, x86 multicore CPUs, and even the Tegra 3 microprocessor, an ARM-GPU design that will be used to power an experimental HPC cluster at the Barcelona Supercomputing Center (BSC). Bodin also said that an Intel MIC (Many Integrated Core) port of OpenACC is in the pipeline. All of these compiler ports should be available later this year.

PGI is keeping its OpenACC development plans a little closer to the vest. But according to PGI compiler engineer Michael Wolfe, the company has received requests for OpenACC support for nearly every processor and coprocessor used in high performance computing. The compiler maker will undoubtedly be developing some of these ports over the next year.

Likewise for Cray, although its OpenACC compiler support is focused on the underlying accelerators of its own XK6 supercomputers. At this point, that’s confined to NVIDIA GPUs. Cray (which also carries the CAPS and PGI compilers for its customers) has a unique OpenACC offering in that it supports the directives in the PGAS languages Co-Array Fortran and Unified Parallel C (UPC) on the XK6.

Besides its applicability to multiple hardware platforms, OpenACC is just plain easier to use when you have lots of existing code. For one thing, OpenACC lets you attack the acceleration in steps. CUDA and OpenCL ports usually require rewriting at least a sizeable chunk of the application being accelerated using low-level APIs. With OpenACC, the programmer just has to insert high-level directives into the existing source, and this can be done iteratively, gradually putting more and more of the code under OpenACC control. This, says PGI’s Wolfe, is “a hell of a lot more productive” than the low-level approach.
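As an illustration of that iterative workflow, here is a minimal sketch built around a hypothetical Jacobi-style stencil (not a code cited in the article). A first pass simply wraps the hot loops in kernels regions; a later pass adds a data region so the arrays stay resident on the accelerator across iterations instead of being copied back and forth on every pass.

```c
/* Hypothetical Jacobi-style smoother used to show incremental OpenACC adoption. */
void jacobi(float *A, float *Anew, int n, int max_iters)
{
    /* Step 2: added later, so A and Anew stay on the accelerator between iterations */
    #pragma acc data copy(A[0:n*n]) create(Anew[0:n*n])
    {
        for (int iter = 0; iter < max_iters; ++iter) {
            /* Step 1: the first directive inserted, offloading the stencil loop nest */
            #pragma acc kernels
            for (int j = 1; j < n - 1; ++j)
                for (int i = 1; i < n - 1; ++i)
                    Anew[j*n + i] = 0.25f * (A[j*n + i + 1] + A[j*n + i - 1] +
                                             A[(j + 1)*n + i] + A[(j - 1)*n + i]);

            /* Step 1 (continued): offload the copy-back loop as well */
            #pragma acc kernels
            for (int j = 1; j < n - 1; ++j)
                for (int i = 1; i < n - 1; ++i)
                    A[j*n + i] = Anew[j*n + i];
        }
    }
}
```

The point is that each step leaves a working program: the code runs correctly (if more slowly) after the first directive is added, and the data region is a later refinement rather than a rewrite.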

Even at the national labs and research centers, where there are computer scientists aplenty, OpenACC is starting to be recognized as an easier path to bring acceleration to hundreds of thousands of lines of legacy code. NASA Ames is already using PGI’s compiler to speed up some of its CFD codes on one of its GPU clusters. And the upcoming deployments of multi-petaflop GPU-based supercomputers like “Titan” at Oak Ridge National Lab should provide a lot more opportunities for OpenACC-based application development. Titan project director Buddy Bland is on record endorsing the technology for software development on that machine.

As with all parallel programming, though, there’s no free lunch. In general, the programmer is probably going to sacrifice some runtime performance (compared to CUDA, for example) for the sake of programmer productivity. But there seems to be a general consensus that intelligent use of directives can easily get you to within 10 or 15 percent of the performance of a low-level implementation. As CAPS’ Bodin explains, to get in that close, “you have to know what you’re doing.” On the other hand, as the compiler technology matures and developers get more adept with OpenACC, the performance gap could narrow even further.
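For a sense of what that kind of hand tuning looks like in practice, here is a minimal sketch; the loop and the clause choices are illustrative assumptions, not settings taken from any of the vendors. Explicit gang and vector scheduling and a pinned vector length are typical of the knobs a developer turns to close the gap with hand-written CUDA.

```c
/* Hypothetical row-scaling kernel showing explicit OpenACC tuning clauses. */
void scale_rows(float *M, const float *s, int rows, int cols)
{
    /* gang: distribute rows across gangs (thread blocks); vector_length: pin the SIMT width */
    #pragma acc parallel loop gang vector_length(128) copy(M[0:rows*cols]) copyin(s[0:rows])
    for (int j = 0; j < rows; ++j) {
        /* vector: map the inner loop onto the vector lanes within a gang */
        #pragma acc loop vector
        for (int i = 0; i < cols; ++i)
            M[j*cols + i] *= s[j];
    }
}
```

Whether 128 is the right vector length, or whether the inner loop should be vectorized at all, depends on the target GPU, which is exactly the sort of knowledge Bodin is referring to.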

The other problem is just a lack of accelerator diversity at the moment. With Intel MIC waiting in the wings, and AMD still pretty much a no-show with server-side GPUs, there’s no immediate need to support anything but NVIDIA’s GPU architecture right now. Worse, both Intel and AMD are backing other parallel computing frameworks that they are rolling into their accelerator programs: OpenMP, Cilk Plus, and TBB for Intel; OpenCL and C++ AMP for AMD.

Fortunately, it probably doesn’t matter that Intel and AMD haven’t hopped on the OpenACC bandwagon. PGI and CAPS can still produce compilers targeting Intel MIC or AMD GPUs, or whatever else comes along. And as long as there are at least two compiler vendors offering such support, the community should be satisfied.

The end game, though, is to fold the OpenACC capabilities into OpenMP. If and when that happens, both Intel and AMD will throw their support behind it. OpenMP has been around for 15 years and is a true industry standard.

There is currently a Working Group on Accelerators in the OpenMP consortium, which is looking at incorporating accelerator directives into the next OpenMP release. And while those directives will be based on the OpenACC directives, they are not likely to be adopted as is. There’s a real risk that if the process gets drawn out much longer and OpenACC captures a critical mass of users, there will end up being two directive-based accelerator standards to choose from.

’Twas ever thus.
