A Comparison of Heterogeneous and Manycore Programming Models

By Yonghong Yan, Barbara M. Chapman and Michael Wong

March 2, 2015

The high performance computing (HPC) community is heading toward the era of exascale machines, expected to exhibit an unprecedented level of complexity and size. The community agrees that the biggest challenges to future application performance lie in efficient node-level execution that can use all the resources in the node. These nodes may comprise many identical compute cores in multiple coherency domains, or they may be heterogeneous, containing specialized cores that perform a restricted set of operations with high efficiency. In general, heterogeneity and manycore processors are both expected to be common. Although we anticipate physically shared memory within each node, access speeds will vary considerably between cores and between types of memory, imposing deeper memory hierarchies and more challenging NUMA effects on performance optimization. Further, a node may present distinct memory address spaces to different computing elements, as demonstrated in today’s accelerator architectures, making explicit data movement necessary.

A critical challenge in using these massive parallel resources is the provision of programming models that facilitate the expression of the required levels of concurrency to exploit all of the hardware resources in the node, while permitting an efficient implementation by the system software stack. Node-level parallel models range from threading primitives such as pthreads, C++11 threads and the Boost thread library for CPUs/SMPs, and low-level models for manycore accelerators such as NVIDIA’s proprietary CUDA and the open standard OpenCL, to high-level models. The latter include directive-based programming models such as OpenMP* and OpenACC*, the latter of which was started to support GPU accelerators; Microsoft’s Visual C++ parallel programming support, which targets Windows platforms and is specifically tailored for C++; and other options such as Cilkplus, TBB and vector primitives.

A programming model sits between the application and the hardware architecture. Its language features either need to virtualize certain hardware capabilities or to simplify the representation of the algorithms and parallelism patterns that the application uses, and the design of those features and interfaces requires agreement on the details of the abstractions. In this article, we summarize a comprehensive list of features for parallel programming models to support recent and future heterogeneous and manycore architectures. We then compare several programming models against these features. [For additional background and relevance, we refer you to the work of our colleague Michael Wolfe, who writes about parallel programming concepts here and discusses multiple levels of parallelism here and here.]

Features

The list of features and their categories is inspired by and developed from the execution model of the Habanero Extreme Scale Software Research Project at Rice University, directed by Vivek Sarkar.

Parallelism: A model should allow users to specify different kinds of parallelism that map easily to parallel architectures and that facilitate the expression of parallel algorithms. At least four parallelism mechanisms should be considered for comparison: 1) Data parallelism (e.g., a parallel loop nest), which typically maps well to manycore accelerators and vector architectures, depending on the granularity of each data-parallel unit; 2) Asynchronous task parallelism, which easily expresses certain parallel algorithms, e.g., irregular and recursive parallelism; 3) Data/event-driven computation, which captures computations characterized as data flow rather than control flow; and 4) Parallelism on host and/or device. Recent accelerator-based architectures attach computational devices as coprocessors and rely on an offloading model to exploit their capabilities. This category differentiates models that support parallelism only on the host, only on an attached device, or on both.
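
To make the first two patterns concrete, the short C sketch below (illustrative only, not taken from the article) expresses data parallelism as an OpenMP parallel loop and asynchronous task parallelism as a recursive computation with tasks; any OpenMP 3.0 or later compiler should accept it.

```c
/* Illustrative sketch: data parallelism (parallel loop) and asynchronous
 * task parallelism (recursive tasks) expressed with OpenMP in C. */
#include <stdio.h>

#define N 1024

/* 1) Data parallelism: iterations of the loop are divided among threads. */
void scale(double *a, double s) {
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] *= s;
}

/* 2) Asynchronous task parallelism: recursive Fibonacci, where each call
 *    spawns two child tasks and joins them with taskwait. */
long fib(int n) {
    long x, y;
    if (n < 2) return n;
    #pragma omp task shared(x)
    x = fib(n - 1);
    #pragma omp task shared(y)
    y = fib(n - 2);
    #pragma omp taskwait          /* join the two child tasks */
    return x + y;
}

int main(void) {
    double a[N];
    for (int i = 0; i < N; i++) a[i] = i;
    scale(a, 2.0);

    long r;
    #pragma omp parallel          /* create a team of threads ...      */
    #pragma omp single            /* ... one of which starts the tasks */
    r = fib(20);

    printf("a[10] = %g, fib(20) = %ld\n", a[10], r);
    return 0;
}
```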

Architecture abstraction and data/computation binding: Optimizing parallel applications on shared memory ccNUMA machines is challenging. The effects of cache coherence, e.g., false sharing, and of NUMA complexity impact application performance in ways that vary widely across systems. With recent architectures exhibiting deeper memory hierarchies and possibly distinct memory/address spaces, the issue becomes even more challenging. A programming model can help in this respect by providing: 1) architecture abstractions, e.g., an “explicit” notion of the NUMA memory regions that matter for performance; 2) syntax that lets users bind computation to data in order to control or influence runtime behavior in favor of the principle of locality; or 3) means to specify explicit data mapping and movement for sharing data between different memories and address spaces.
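
As an illustration of points 2) and 3), the hedged C sketch below uses OpenMP 4.0’s proc_bind clause to keep threads near their parent’s place (assuming places are configured, e.g., via OMP_PLACES=cores) and the target/map constructs to move data explicitly between host and device address spaces.

```c
/* Illustrative sketch: computation/data binding and explicit data mapping
 * with OpenMP 4.0 constructs (assumes a compiler with device support). */

void saxpy_host(int n, float a, const float *x, float *y) {
    /* Keep worker threads near the parent thread's place, e.g., within one
     * NUMA domain, assuming OMP_PLACES describes the machine topology. */
    #pragma omp parallel for proc_bind(close)
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

void saxpy_device(int n, float a, const float *x, float *y) {
    /* Explicit data movement: x is copied to the device, y is copied in and
     * back out, which matters when host and device have separate memories. */
    #pragma omp target map(to: x[0:n]) map(tofrom: y[0:n])
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```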

Synchronizations: A programming model should provide constructs for coordinating the various parallel work units: for example, barrier, reduction and join operations for synchronizing parallel threads or tasks; point-to-point signal and wait operations to create pipelined or workflow executions of parallel tasks; and phase-based synchronization for streaming computations.
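
The following C sketch (illustrative only) shows three of these coordination forms in OpenMP: a reduction across a parallel loop, an explicit thread barrier, and a join of child tasks via taskwait.

```c
/* Illustrative sketch: reduction, barrier, and task join in OpenMP. */
#include <stdio.h>

int main(void) {
    const int n = 1000;
    double sum = 0.0;

    #pragma omp parallel
    {
        /* Reduction: each thread accumulates a private partial sum that the
         * runtime combines into the shared variable at the end of the loop. */
        #pragma omp for reduction(+:sum)
        for (int i = 1; i <= n; i++)
            sum += (double)i;

        /* Barrier: no thread continues until all threads have arrived here. */
        #pragma omp barrier

        /* Join: taskwait blocks the generating task until its children finish. */
        #pragma omp single
        {
            #pragma omp task
            printf("sum = %.0f\n", sum);
            #pragma omp taskwait
        }
    }
    return 0;
}
```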

Mutual exclusion: Interfaces such as locks and mutexes are still widely used for protected data access. A model should provide language constructs for easily creating the exclusive data access mechanisms needed in parallel programming, and should define appropriate semantics for mutual exclusion to reduce the opportunities for introducing deadlocks. Architectural features such as transactional memory provide alternative ways to achieve similar data protection, and these could also become part of the interface of a parallel model.
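
As a brief illustration (not from the article), the C sketch below protects a shared counter in two common ways offered by OpenMP: a named critical section and an explicit runtime lock.

```c
/* Illustrative sketch: two mutual-exclusion idioms in OpenMP. */
#include <omp.h>

static int counter = 0;
static omp_lock_t counter_lock;   /* must be set up once with omp_init_lock() */

void bump_with_critical(void) {
    /* Named critical section: at most one thread executes the block at a time. */
    #pragma omp critical(counter_update)
    counter++;
}

void bump_with_lock(void) {
    /* Explicit lock: more flexible (e.g., one lock per data element), but the
     * programmer must pair set/unset correctly to avoid deadlock. */
    omp_set_lock(&counter_lock);
    counter++;
    omp_unset_lock(&counter_lock);
}
```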

Other features: Error handling, tools support, and multiple language/library bindings are also important features of parallel programming models. Error handling provides support for dealing with faults from the user program or the system, improving system and application resilience. Support for tools, e.g., performance profiling and debugging tools, is essential to improve the productivity of parallel application development and performance tuning. For parallel high performance computing, C, C++ and Fortran are still the dominant base languages. While functional languages can provide a cleaner abstraction for concurrency, it is not easy to rewrite all legacy code and libraries in a new base language. Ideally, a model would support at least these three languages.

We have used these features to compare a list of commonly used node-level programming models for parallel and high performance computing that have commercial implementations, including OpenMP, Intel Cilkplus, Intel TBB, OpenACC, NVIDIA CUDA, OpenCL, C++11 and pthreads. Pthreads and C++11, which added standard support for multithreading, were chosen as the baseline library and language that provide the core functionality needed to enable other, higher-level language features. CUDA (for NVIDIA GPUs only) and OpenCL are considered low-level programming interfaces for recent manycore and accelerator architectures; they can be used directly as user-level programming interfaces or serve as intermediate-level targets for the compiler transformations of high-level interfaces. The recent OpenACC standard, created as a high-level interface for manycore accelerators, helps users gain early experience with directive-based interfaces for such devices. Intel TBB and Cilkplus are task-based parallel programming models for multicore and shared memory systems that have quality commercial and open-source implementations. OpenMP is a comprehensive, well-developed standard that has been driven by industry, government labs and academia; it has multiple commercial and quality open-source implementations supporting hardware from many vendors, and much existing scientific code already uses it.

Comparisons

The comparisons are shown in Figures 1 and 2. For parallelism support, asynchronous tasking or threading is still the foundational parallel mechanism supported by all of the models, and data parallelism (such as OpenMP worksharing) can be implemented on top of basic asynchronous tasking and join synchronization. Overall, OpenMP provides the most comprehensive set of features, supporting a wide variety of parallelism patterns and architectures on both host and devices, while the others concentrate on parallelism on either the host or the device only. For accelerators such as NVIDIA GPUs, OpenACC and CUDA provide language constructs that support these parallelism patterns. For architectural abstraction, only OpenMP provides constructs to model the memory hierarchy (as places) and the binding of computation to data (the proc_bind clause). Each of the programming models that supports manycore architectures has its own way of organizing the massive threading capability (thousands of threads) into a multi-level thread hierarchy, e.g., OpenMP’s teams of threads, OpenACC’s gang/worker/vector clauses, CUDA’s blocks/threads and OpenCL’s work-groups. Models that support devices and offloading of computation provide constructs to specify data movement between discrete memory spaces; models that do not target other compute devices do not require them.
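
For instance, a single loop written against OpenMP’s two-level hierarchy might look like the hedged sketch below; the league of teams plays a role analogous to CUDA’s grid of blocks or OpenCL’s work-groups, and the threads within a team correspond to CUDA threads or OpenCL work-items (assumes an OpenMP 4.0 compiler with device offloading).

```c
/* Illustrative sketch: a vector add spread over OpenMP's thread hierarchy on
 * an attached device, with explicit data mapping between memory spaces. */
void vadd(int n, const float *x, const float *y, float *z) {
    #pragma omp target teams distribute parallel for \
            map(to: x[0:n], y[0:n]) map(from: z[0:n])
    for (int i = 0; i < n; i++)   /* outer: iterations distributed across teams */
        z[i] = x[i] + y[i];       /* inner: and across the threads in each team */
}
```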


Figure 1: Comparison of heterogeneous and manycore programming models – Parallelism patterns and Architecture abstractions and data/computation binding

In Figure 2, we show the feature comparison in the other categories. Of the three commonly used synchronization operations, i.e., barrier, reduction and join, only OpenMP supports all three. Note that since Cilkplus and Intel TBB emphasize tasks rather than threads, the concept of a thread barrier makes little sense in their models, so its omission is not a problem. Locks and mutexes are still the most widely used mechanisms for providing mutual exclusion. Most of the models have C and C++ bindings, but only OpenMP and OpenACC have Fortran bindings. Most models do not provide dedicated mechanisms for error handling, and many leverage C++ exceptions for that purpose; OpenMP is an exception, with its “cancel” construct supporting an emerging error model. For tools support, Cilkplus, CUDA and OpenMP are the three models whose implementations provide a dedicated tool interface or software. Many of the “host only” models can use standard system profiling tools such as Linux perf, and in some cases vendor or third-party profiling tools also have explicit support for OpenMP analyses.
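
As an example of that error model, the hedged C sketch below uses OpenMP 4.0 cancellation to abandon a parallel loop once any thread reports a failure; process() is a hypothetical work routine introduced for illustration, and cancellation must be enabled at run time (e.g., OMP_CANCELLATION=true).

```c
/* Illustrative sketch: aborting a parallel loop with OpenMP's cancel construct. */
int process(int i);               /* hypothetical per-item work; nonzero on error */

int run(int n) {
    int failed = 0;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        if (process(i) != 0) {
            #pragma omp atomic write
            failed = 1;
            #pragma omp cancel for             /* request cancellation of the loop */
        }
        #pragma omp cancellation point for     /* other threads check and exit here */
    }
    return failed;
}
```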


Figure 2: Comparison of heterogeneous and manycore programming models – Synchronization, Mutual exclusion, Language bindings, Error handling and Tool support

Conclusion

From the comparison, OpenMP clearly supports the most comprehensive set of features. It has evolved rapidly and now supports emerging heterogeneous and manycore architectures, including accelerators, as well as conventional shared memory SMP, NUMA and multicore systems. The OpenMP Architecture Review Board’s new mission statement, “Standardize directive-based multi-language high-level parallelism that is performant, productive and portable,” indicates a clear direction to support a broad form of parallelism beyond HPC workloads. The unique directive-based approach of OpenMP, which the OpenACC model for accelerators also borrows, enables a productive parallel programming model that significantly reduces migration and porting effort for applications, since it does not require that they be rewritten in a new language. OpenMP is the only directive-based specification that allows the exploitation of the parallelism available both in the multiple CPUs of a host node and in attached devices using a single language.

While a specification, whether a de facto or a formal standard, defines the interfaces for writing parallel programs, it is only used and adopted when quality implementations exist. Implementing a high-level programming model such as OpenMP or OpenACC poses many more challenges than realizing models that are purely library-based (TBB and pthreads) or that consist of a small set of extensions to standard languages (OpenCL, CUDA and Cilkplus). To the best of our knowledge at the time of writing, the latest OpenACC standard, version 2.0, has commercial implementations from PGI and Cray for NVIDIA GPU architectures. The latest OpenMP standard already has partial support in the latest (or beta) GNU compiler, Oracle Solaris Studio, and Intel Parallel Studio. There is a sustained effort to implement full OpenMP support in the Clang/LLVM compiler to coincide with the arrival of the standard. PathScale is also working aggressively to release a compiler in the near future that supports the latest versions of both OpenACC and OpenMP.

The choice of parallel model for a particular application and/or hardware architecture depends on the programmability and portability of the model as well as on the performance its implementations deliver to users. For example, GPGPU accelerator support in high-level programming interfaces, one of the most urgently needed features for node-level parallel programming, is now available in both OpenACC and OpenMP, with OpenACC having been developed earlier and having more existing compiler support. However, many users still choose the proprietary CUDA model despite its productivity challenges, because it currently delivers higher performance than the high-level programming models where it is available. Thus the existence of multiple programming models, each with its own unique set of features serving the specific needs of users and applications, and each offering a different tradeoff between productivity and performance, is still necessary.

About the Authors

Dr. Yonghong Yan is an Assistant Professor at Oakland University, an OpenMP Architecture Review Board (ARB) representative, and chair of the OpenMP Interoperability language subcommittee. Since his Ph.D. studies, Yonghong has worked extensively on multiple compiler/runtime projects, including the recent OpenMP and OpenACC compiler based on OpenUH/Open64, and the Habanero-C (X10-dialect) and PACE compilers based on ROSE/LLVM during his postdoctoral work at Rice University.

Barbara Chapman is a Professor of Computer Science at the University of Houston, Texas, where she also directs the Center for Advanced Computing and Data Systems. Chapman has performed research on parallel programming languages and related implementation technology for over 20 years and has been involved in the development of the OpenMP directive-based programming standard since 2001. She also contributes to the OpenSHMEM and OpenACC standards efforts. Her research group has developed OpenUH, a state-of-the-art open source compiler that is used to explore language, compiler and runtime techniques, with a special focus on multithreaded programming. Dr. Chapman’s research also explores the optimization of partitioned global address space programs, strategies for runtime code optimization, compiler-tools interactions and high-level programming models for embedded systems.

Michael Wong is the CEO of the OpenMP Corporation, a consortium of 26 member companies that hold the de facto standard parallel programming specification for C/C++ and Fortran. He is the IBM and Canadian head of delegation to the C++ Standards Committee, and chair of the WG21 Transactional Memory group. He is the co-author of a number of C++/OpenMP/TM features and patents. He is the past C++ team lead for IBM’s XL C++ and C compilers and has been designing C++ compilers for twenty years. Currently, he is leading the C++11 deployment as a senior technical lead for IBM. His current research interests include parallel programming, C++ benchmark performance, the object model, generic programming and template metaprogramming. He is a frequent speaker at technical conferences and serves on the program committees of Boost and IWOMP. He holds a B.Sc. from the University of Toronto and a Master’s in Mathematics from the University of Waterloo.