OpenMP Takes To Accelerated Computing

By Michael Feldman

November 27, 2012

OpenMP, the popular parallel programming standard for high performance computing, is about to come out with a new version incorporating a number of enhancements, the most significant one being support for HPC accelerators. Version 4.0 will include the functionality that was implemented in OpenACC, the accelerator API that splintered off from the OpenMP work, as well as offer additional support beyond that. The new standard is expected to become the law of the land sometime in early 2013.

In high performance computing, OpenMP serves as the de facto parallel programming framework for shared memory environments — that is, code that shares a coherent memory space within a server node. Combined with MPI, which supports distributed parallelism across many nodes, the two standards provide the software foundation for most HPC applications.
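
As a rough, illustrative sketch (not drawn from the article), the hybrid model typically looks like this: MPI handles communication between nodes, while each rank opens OpenMP parallel regions on the cores it owns. The names here are hypothetical, and it assumes an MPI library and an OpenMP-capable compiler (e.g., built with mpicc -fopenmp).

```c
/* Minimal hybrid MPI + OpenMP sketch: MPI provides distributed
 * parallelism across nodes, OpenMP provides shared-memory
 * parallelism within each node. Illustrative only. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI rank spawns an OpenMP thread team on its node. */
    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```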

Since the advent of multicore CPUs, and more recently attached accelerators like GPUs, parallelism at the node level has skyrocketed. While OpenMP has supported multicore processors for most of its 15-year history, support for accelerators is just now being folded in.

Some would say a little late. GPU computing has been around for six years, thanks mostly to the efforts of NVIDIA, which has spearheaded this new programming paradigm. In fact, the GPU maker’s early and mostly unchallenged entrance into HPC acceleration led to the emergence of a number of other parallel programming frameworks, including NVIDIA’s own CUDA software toolset, OpenCL, and more recently, OpenACC.

OpenACC is somewhat of a historical accident. Although the OpenMP accelerator work began a few years ago, at that time NVIDIA had the only credible products on the market, namely its Tesla GPU offerings. Customers of those products wanted a directives-based API for current development work that offered a higher-level framework than either CUDA or OpenCL, and had at least some promise of hardware independence. At the time, it looked as though there would be no OpenMP accelerator standard until Intel brought its Xeon Phi coprocessor to market. So NVIDIA, along with Cray, and compiler-makers CAPS enterprise and The Portland Group Inc (PGI), developed OpenACC based on some of the initial OpenMP effort.

As a result of this common history, both OpenMP and OpenACC offer a directives-based approach to parallel programming, and in the case of developing codes for accelerators, share many of the same capabilities. Intel senior scientist and OpenMP evangelist Tim Mattson says the emerging OpenMP accelerator standard is more or less a superset of the OpenACC API. According to him, porting an OpenACC code to OpenMP will be relatively easy. “Moving from OpenACC to the OpenMP directives, as defined in the current Technical Report, is trivial,” says Mattson.
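
To make the porting claim concrete, here is a minimal, hypothetical sketch of the same loop written first with an OpenACC directive and then with the offload directives described in the Technical Report (and later standardized in OpenMP 4.0). The function names and loop are illustrative, not taken from either specification.

```c
/* Illustrative only: the same vector scale expressed with an OpenACC
 * directive and with OpenMP accelerator (target) directives. */

/* OpenACC version */
void scale_acc(int n, float a, float *x)
{
    #pragma acc parallel loop copy(x[0:n])
    for (int i = 0; i < n; i++)
        x[i] = a * x[i];
}

/* OpenMP accelerator-directive version */
void scale_omp(int n, float a, float *x)
{
    #pragma omp target map(tofrom: x[0:n])
    #pragma omp teams distribute parallel for
    for (int i = 0; i < n; i++)
        x[i] = a * x[i];
}
```

The directive spellings differ, but the structure of the code, the data-movement clauses, and the loop-level parallelism map across almost one-to-one.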

The Technical Report he refers to is the document released by the OpenMP Architecture Review Board (ARB) three weeks ago, the idea being to gather user and vendor feedback before incorporating the new directives into OpenMP 4.0. Assuming all goes as planned, the final version of the accelerator directives will be slid into OpenMP 4.0 by the first quarter of 2013, first as a release candidate and soon thereafter as an official standard. The new version will also have a number of other enhancements, including thread affinity (which lets users control where OpenMP threads execute), initial support for Fortran 2003, SIMD support (to vectorize serial and parallelized loops), user-defined reductions, and sequentially consistent atomics.
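
A brief, hedged sketch of two of those additions, user-defined reductions and the simd construct, is shown below. The complex-product and summation examples are hypothetical and assume an OpenMP 4.0-capable C compiler.

```c
/* Illustrative sketch of two OpenMP 4.0 additions mentioned above. */
#include <complex.h>

/* User-defined reduction: multiply complex numbers together. */
#pragma omp declare reduction(cmul : float complex : omp_out *= omp_in) \
    initializer(omp_priv = 1.0f)

float complex product(int n, const float complex *z)
{
    float complex p = 1.0f;
    #pragma omp parallel for reduction(cmul : p)
    for (int i = 0; i < n; i++)
        p *= z[i];
    return p;
}

float sum_simd(int n, const float *x)
{
    float s = 0.0f;
    /* The simd directive asks the compiler to vectorize the loop. */
    #pragma omp simd reduction(+ : s)
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}
```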

The Technical Report was a product of the ARB working group on accelerators, which included all four OpenACC backers. So it’s a given that GPUs will be well supported in OpenMP going forward. But since the working group also included x86 vendors Intel and AMD, DSP provider Texas Instruments, as well as hybrid computer-maker Convey, there is likely to be something in the new standard for everyone. The goal is to allow developers to write target-independent applications that can take advantage of the latest GPUs from NVIDIA and AMD, Intel’s Xeon Phi, FPGAs, and even TI’s DSP chips. The directives are also designed to accommodate future types of accelerators.

The trick is to design the compiler directives abstractly enough to hide the hardware dependencies of a diverse group of architectures, but not so abstractly that it becomes impossible for compilers to generate efficient, performant code from them. Assuming the compiler implementations from Intel, PGI, CAPS, and others live up to that ideal, the developer community will likely gravitate toward the new OpenMP standard.

For the time being, though, it’s business as usual for the OpenACC backers. A draft of version 2.0 was made public for comment at the recent Supercomputing Conference (SC12). In concert, both PGI and CAPS announced OpenACC compiler support for the latest accelerators: Intel’s Xeon Phi coprocessor, NVIDIA’s K20/K20X GPUs, AMD APUs and GPUs, and the ARM-based CARMA platform. For the near term, at least, accelerator support in both OpenACC and OpenMP looks set to move forward in tandem.

How long that lasts is not clear. But given the propensity of both developers and software toolmakers to support monolithic standards, at some point the two frameworks should merge. “It’s now in our camp of OpenMP to bring it back together as one happy family,” says Mattson.
