A Comparison of Heterogeneous and Manycore Programming Models

By Yonghong Yan, Barbara M. Chapman and Michael Wong

March 2, 2015

The high performance computing (HPC) community is heading toward the era of exascale machines, which are expected to exhibit an unprecedented level of complexity and size. The community agrees that the biggest challenges to future application performance lie in efficient node-level execution that can use all the resources in the node. These nodes may comprise many identical compute cores in multiple coherency domains, or they may be heterogeneous and contain specialized cores that perform a restricted set of operations with high efficiency. In general, heterogeneity and manycore processors are both expected to be common. Although we anticipate physically shared memory within each node, access speeds will vary considerably between cores and between types of memory, imposing deeper memory hierarchies and more challenging NUMA effects on performance optimization. Further, a node may present distinct memory address spaces to different computing elements, as today's accelerator architectures demonstrate, making explicit data movement necessary.

A critical challenge in using these massive parallel resources is the provision of programming models that facilitate the expression of the levels of concurrency required to exploit all of the hardware resources in the node, while permitting an efficient implementation by the system software stack. Node-level parallel models range from threading primitives such as pthreads, C++11 threads and the Boost thread library for CPUs/SMPs, and low-level models for manycore accelerators such as NVIDIA's proprietary CUDA and the open standard OpenCL, to high-level models. The latter include directive-based programming models such as OpenMP* and OpenACC*, the second of which was created to support GPU accelerators; Microsoft's Visual C++ parallel programming support on Windows platforms, tailored specifically to C++; and other options such as Cilk Plus, TBB and vector primitives.
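To make the low-level versus directive-based contrast concrete, the sketch below (our own, not drawn from any of these specifications) computes the same vector sum twice in C: first with explicit pthreads thread management, then as a single OpenMP directive. The array names and thread count are illustrative assumptions.

```c
#include <pthread.h>

#define N 1000000
#define NUM_THREADS 4

static float a[N], b[N], c[N];

/* Low-level: each pthread computes one contiguous chunk of the loop. */
static void *add_chunk(void *arg) {
    long id = (long)arg;
    long chunk = N / NUM_THREADS;
    long lo = id * chunk;
    long hi = (id == NUM_THREADS - 1) ? N : lo + chunk;
    for (long i = lo; i < hi; i++)
        c[i] = a[i] + b[i];
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_THREADS];
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&tid[t], NULL, add_chunk, (void *)t);
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_join(tid[t], NULL);

    /* High-level: the same computation as one OpenMP directive; the
       runtime handles thread creation, work partitioning and joining. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        c[i] = a[i] + b[i];
    return 0;
}
```

The directive hides exactly the bookkeeping that the pthreads version spells out by hand, which is why the high-level models are attractive for porting existing loop nests.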

A programming model sits between the application and the hardware architecture. Its language features must either virtualize certain hardware capabilities or simplify the representation of the algorithms and parallelism patterns that the application uses, and the design of those features and interfaces must settle on the right level of detail for these abstractions. In this article, we summarize a comprehensive list of features that parallel programming models need in order to support recent and future heterogeneous and manycore architectures. We then compare several programming models against these features. [For additional background and relevance, we refer you to the work of our colleague Michael Wolfe, who writes about parallel programming concepts here and discusses multiple levels of parallelism here and here.]

Features

The list of features and their categories is inspired by, and developed from, the execution model of the Habanero Extreme Scale Software Research Project at Rice University, directed by Vivek Sarkar.

Parallelism: A model should allow users to specify different kinds of parallelism that map easily to parallel architectures and that facilitate the expression of parallel algorithms. At least four parallelism mechanisms should be considered in a comparison: 1) data parallelism (e.g., a parallel loop nest), which typically maps well to manycore accelerators and vector architectures, depending on the granularity of each data-parallel unit; 2) asynchronous task parallelism, which easily expresses certain parallel algorithms, e.g., irregular and recursive parallelism; 3) data/event-driven computation, which captures computations characterized as data flow rather than control flow; and 4) parallelism on the host and/or the device. Recent accelerator-based architectures attach computational devices as coprocessors and rely on an offloading model to exploit their capabilities; this category distinguishes models that support parallelism only on the host from those that also support parallelism on accelerators.
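As one concrete (and deliberately OpenMP-flavored) illustration of these four mechanisms, consider the following C sketch; process and fib are hypothetical application routines, and the OpenMP 4.0 task dependences and target construct are assumed to be available in the compiler.

```c
/* Hypothetical helpers standing in for real application work. */
extern void process(int *block, int n);
extern int  fib(int n);

void parallelism_examples(int *data, int n) {
    /* 1) Data parallelism: a parallel loop nest. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        data[i] *= 2;

    /* 2) Asynchronous task parallelism: two tasks joined by taskwait. */
    #pragma omp parallel
    #pragma omp single
    {
        int x, y;
        #pragma omp task shared(x)
        x = fib(30);
        #pragma omp task shared(y)
        y = fib(29);
        #pragma omp taskwait   /* join both tasks; x and y now hold results */
    }

    /* 3) Data-driven computation: the consumer task starts only when its
          input dependence on data[0] is satisfied. */
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task depend(out: data[0])
        data[0] = 42;
        #pragma omp task depend(in: data[0])
        process(data, n);
    }

    /* 4) Device parallelism: offload the loop to an attached accelerator. */
    #pragma omp target map(tofrom: data[0:n])
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        data[i] += 1;
}
```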

Architecture abstraction and data/computation binding: Optimizing parallel applications on shared memory ccNUMA machines is challenging. The effects of cache coherence, e.g., false sharing, and of NUMA complexity impact application performance in ways that vary widely across systems. With recent architectures exhibiting deeper memory hierarchies and possibly distinct memory/address spaces, the issue becomes more challenging. A programming model can help in this respect by providing: 1) architecture abstractions, e.g., an "explicit" notion of the NUMA memory regions that matter for performance; 2) syntax that lets users bind computation to data in order to control or influence runtime behavior in favor of the principle of locality; or 3) means to specify explicit data mapping and movement for sharing data between distinct memories and address spaces.
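In OpenMP terms, item 1 is addressed by places (e.g., the OMP_PLACES environment variable), while items 2 and 3 might be spelled as in the minimal sketch below; the arrays and the assumption that places are configured externally (e.g., OMP_PLACES=sockets OMP_PROC_BIND=close) are ours.

```c
#define N 4096
static float a[N], b[N];

void locality_examples(void) {
    /* Item 2: bind computation to data by keeping the team's threads
       close together (e.g., within one NUMA domain), given externally
       defined places. */
    #pragma omp parallel for proc_bind(close)
    for (int i = 0; i < N; i++)
        a[i] = b[i] * 2.0f;

    /* Item 3: explicit data mapping for a discrete device memory; copy
       b to the device, compute there, and copy a back to the host. */
    #pragma omp target map(to: b) map(from: a)
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = b[i] * 2.0f;
}
```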

Synchronizations: A programming model should provide constructs for supporting coordination between various parallel work units. Examples include barrier, reduction and join operations for synchronizing parallel threads or tasks; point-to-point signal and wait operations for creating pipelined or workflow executions of parallel tasks; and phase-based synchronization for streaming computations.
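A minimal OpenMP sketch of the first three operations, barrier, reduction and join, might look as follows; it is illustrative rather than drawn from any particular application.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    int sum = 0;

    /* Barrier and reduction across a team of threads. */
    #pragma omp parallel reduction(+: sum)
    {
        sum += omp_get_thread_num();
        #pragma omp barrier      /* all threads wait here before phase 2 */
        /* ... phase 2 of the computation ... */
    }

    /* Join: a parent waits for its child tasks to complete. */
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task
        printf("child task A\n");
        #pragma omp task
        printf("child task B\n");
        #pragma omp taskwait     /* join both children */
    }

    printf("sum = %d\n", sum);
    return 0;
}
```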

Mutual exclusion: Interfaces such as locks and mutexes are still widely used for protected data access. A model should provide language constructs for easily creating the exclusive-access mechanisms needed in parallel programming, and should define semantics for mutual exclusion that reduce the opportunities for introducing deadlock. Architectural features such as transactional memory provide alternative ways to achieve similar data protection, and could also become part of the interface of a parallel model.
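For instance, OpenMP offers several such mechanisms side by side: a structured critical construct, an explicit lock API resembling a pthreads mutex, and atomics for single memory updates. The sketch below is ours and only illustrative.

```c
#include <omp.h>

static int counter = 0;
static omp_lock_t lock;

void exclusion_examples(void) {
    omp_init_lock(&lock);

    #pragma omp parallel
    {
        /* Structured mutual exclusion: the critical construct. */
        #pragma omp critical
        counter++;

        /* Explicit lock API, closest to a pthreads mutex. */
        omp_set_lock(&lock);
        counter++;
        omp_unset_lock(&lock);

        /* For a single memory update, an atomic is usually cheaper. */
        #pragma omp atomic
        counter++;
    }

    omp_destroy_lock(&lock);
}
```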

Other features: Error handling, tools support, and multiple language/library bindings are also important features of parallel programming models. Error handling provides support for dealing with faults in the user program or the system, improving system and application resilience. Support for tools, e.g., performance profiling and debugging tools, is essential for productive parallel application development and performance tuning. For parallel high performance computing, C, C++ and Fortran are still the dominant base languages. While functional languages can provide a cleaner abstraction for concurrency, it is not easy to rewrite all legacy code and libraries in a new base language. Ideally, a model would support at least these three languages.

We have used these features to compare a list of commonly used node-level programming models for parallel and high performance computing that have commercial implementations: OpenMP, Intel Cilk Plus, Intel TBB, OpenACC, NVIDIA CUDA, OpenCL, C++11 and pthreads. Pthreads and C++11, the latter of which was extended to support multithreading, were chosen as the baseline library and language that provide the core functionality needed to enable other high-level language features. CUDA (for NVIDIA GPUs only) and OpenCL are considered low-level programming interfaces for recent manycore and accelerator architectures, usable either as user-level programming interfaces or as intermediate-level targets for compiler transformations of higher-level interfaces. The recent OpenACC standard, created as a high-level interface for manycore accelerators, helps users gain early experience with directive-based interfaces. Intel TBB and Cilk Plus are task-based parallel programming models for multicore and shared memory systems that have quality implementations and commercial support as well as open-source implementations. OpenMP is a comprehensive, well-developed standard that has been driven by industry, government labs and academia; it has multiple commercial and quality open-source implementations supporting hardware from many vendors, and much existing scientific code already uses it.

Comparisons

The comparisons are shown in Figures 1 and 2. For parallelism support, asynchronous tasking or threading is still the foundational parallel mechanism supported by all of the models, and data parallelism (such as OpenMP worksharing) can be implemented using basic asynchronous tasking plus join synchronization. Overall, OpenMP provides the most comprehensive set of features to support a wide variety of parallelism patterns and architectures, on both host and devices, while the others concentrate on supporting parallelism on either the host or the device only. For accelerators such as NVIDIA GPUs, OpenACC and CUDA provide language constructs that support these parallelism patterns. For architectural abstraction, only OpenMP provides constructs to model the memory hierarchy (as places) and the binding of computation with data (the proc_bind clause). Each of the programming models that support manycore architectures has its own way of organizing the massive threading capabilities (x1000) into a multi-level thread hierarchy, e.g., OpenMP's teams of threads, OpenACC's gang/worker/vector clauses, CUDA's blocks/threads and OpenCL's work-groups; the sketch after Figure 1 shows two of these spellings side by side. Models that support devices and offloaded computation provide constructs to specify data movement between discrete memory spaces; models that do not support other compute devices do not need them.


Figure 1: Comparison of heterogeneous and manycore programming models – Parallelism patterns and Architecture abstractions and data/computation binding
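To illustrate these thread hierarchies, the sketch below (our own, with an illustrative problem size) expresses the same vector addition with OpenMP's teams-of-threads hierarchy and with OpenACC's gang/worker/vector hierarchy, which maps roughly onto CUDA's blocks/threads and OpenCL's work-groups/work-items.

```c
#define N (1 << 20)

/* OpenMP: a league of teams, with worksharing threads inside each team. */
void vadd_omp(const float *a, const float *b, float *c) {
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
}

/* OpenACC: gangs and vector lanes, analogous to CUDA blocks/threads. */
void vadd_acc(const float *restrict a, const float *restrict b,
              float *restrict c) {
    #pragma acc parallel loop gang vector \
            copyin(a[0:N], b[0:N]) copyout(c[0:N])
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
}
```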

In Figure 2, we show the feature comparison in the other categories. Of the three commonly used synchronization operations, i.e., barrier, reduction and join operations, only OpenMP supports all three. Note that since Cilk Plus and Intel TBB emphasize tasks rather than threads, the concept of a thread barrier makes little sense in their models, so its omission is not a problem. Locks and mutexes are still the most widely used mechanisms for providing mutual exclusion. Most of the models have C and C++ bindings, but only OpenMP and OpenACC have Fortran bindings. Most models do not provide dedicated mechanisms for error handling, and many leverage C++ exceptions for that purpose. OpenMP is an exception: its "cancel" construct supports an emerging error model (see the sketch after Figure 2). For tools support, Cilk Plus, CUDA, and OpenMP are the three models whose implementations provide a dedicated tool interface or software. Many of the host-only models can use standard system profiling tools such as Linux perf, and in some cases vendor or third-party profiling tools also have explicit support for OpenMP analyses.


Figure 2: Comparison of heterogeneous and manycore programming models – Synchronizations, Mutual exclusion, Language bindings, Error handling and Tool support
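One plausible use of OpenMP cancellation, sketched by us rather than taken from the specification, is to abandon a parallel search once a match is found; it assumes cancellation has been activated via the OMP_CANCELLATION environment variable.

```c
/* Abort a parallel search early once a match is found. */
int find(const int *data, int n, int key) {
    int found = -1;
    #pragma omp parallel for shared(found)
    for (int i = 0; i < n; i++) {
        if (data[i] == key) {
            #pragma omp atomic write
            found = i;
            #pragma omp cancel for           /* request cancellation */
        }
        #pragma omp cancellation point for   /* where threads may exit */
    }
    return found;
}
```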

Conclusion

From the comparison, OpenMP clearly supports the most comprehensive set of features. It has evolved rapidly and now supports the emerging heterogeneous and manycore architectures, including accelerators, as well as conventional shared memory SMP, NUMA and multicore systems. The OpenMP Architecture Review Board's new mission statement, "Standardize directive-based multi-language high-level parallelism that is performant, productive and portable," indicates a clear direction to support a broad form of parallelism beyond HPC workloads. The unique directive-based approach of OpenMP, which the OpenACC model for accelerators also borrows, enables a productive parallel programming model that significantly reduces migration and porting effort for applications, since it does not require them to be rewritten in a new language. OpenMP is the only directive-based specification that allows exploitation of the parallelism available both in the multiple CPUs of a host node and in attached processors, using a single language.

While a specification, whether a de facto or a formal standard, defines the interfaces for writing parallel programs, it is only used and adopted when quality implementations exist. Implementing a high-level programming model such as OpenMP or OpenACC poses many more challenges than realizing models that are purely library-based (TBB and pthreads) or that consist of a small set of extensions to standard languages (OpenCL, CUDA and Cilk Plus). To the best of our knowledge at the time of writing, the latest OpenACC standard, version 2.0, has commercial implementations from PGI and Cray for NVIDIA GPU architectures. The latest OpenMP standard already has partial support in the latest (or beta) GNU compiler, Oracle Solaris Studio, and Intel Parallel Studio. There is a sustained effort to implement full OpenMP support in the Clang/LLVM compiler in time to intersect the arrival of the standard, and PathScale is also working aggressively to release a compiler in the near future that supports the latest versions of both OpenACC and OpenMP.

The choice of parallel model for a particular application and/or hardware architecture depends on the programmability and portability of the model, as well as on the performance its implementations deliver to users. For example, GPGPU accelerator support in high-level programming interfaces, one of the most urgently needed features for node-level parallel programming, is now available in both OpenACC and OpenMP, with OpenACC developed earlier and enjoying more existing compiler support. Yet a wide variety of users still choose the proprietary CUDA model, despite its productivity challenges, because it currently delivers higher performance than the high-level programming models where it is available. Thus multiple programming models, each with a unique set of features serving the specific needs of users and applications, and each striking a different tradeoff between productivity and performance, remain necessary.

About the Authors

Dr. Yonghong Yan is an Assistant Professor at Oakland University, an OpenMP Architecture Review Board (ARB) representative, and chair of the OpenMP Interoperability language subcommittee. Since his Ph.D. studies, Yonghong has worked extensively on multiple compiler/runtime projects, including the recent OpenMP and OpenACC compiler based on OpenUH/Open64, and the Habanero-C (X10-dialect) compiler and PACE compiler based on ROSE/LLVM during his postdoc at Rice University.


Barbara Chapman is a Professor of Computer Science at the University of Houston, Texas, where she also directs the Center for Advanced Computing and Data Systems. Chapman has performed research on parallel programming languages and related implementation technology for over 20 years and has been involved in the development of the OpenMP directive-based programming standard since 2001. She also contributes to the OpenSHMEM and OpenACC programming standards efforts. Her research group has developed OpenUH, a state-of-the-art open source compiler used to explore language, compiler and runtime techniques, with a special focus on multithreaded programming. Dr. Chapman's research also explores the optimization of partitioned global address space programs, strategies for runtime code optimization, compiler-tools interactions and high-level programming models for embedded systems.

Michael Wong is the CEO of the OpenMP Corporation, a consortium of 26 member companies that maintains the de facto standard parallel programming specification for C/C++ and Fortran. He is the IBM and Canadian head of delegation to the C++ Standard committee and chair of the WG21 Transactional Memory group. He is a co-author of a number of C++/OpenMP/TM features and patents, and is the past C++ team lead for IBM's XL C++ and C compilers, having designed C++ compilers for twenty years. Currently, he is leading the C++11 deployment as a senior technical lead for IBM. His research interests include parallel programming, C++ benchmark performance, object models, generic programming and template metaprogramming. He is a frequent speaker at technical conferences and serves on the programming committees of Boost and IWOMP. He holds a B.Sc. from the University of Toronto and a Master's in Mathematics from the University of Waterloo.
