A Call to Arms for Parallel Programming Standards

By Nicole Hemsoth

November 16, 2010

Although the parallel programming landscape is relatively young, it’s already easy to get lost in. Besides legacy frameworks like MPI and OpenMP, we now have NVIDIA’s CUDA, OpenCL, Cilk, Intel Threading Building Blocks, Microsoft’s parallel programming extensions for .NET, and a whole gamut of PGAS languages.

And according to Intel’s Tim Mattson, that’s not necessarily a good thing. Mattson, who is a principal engineer (and parallel programming evangelist) at the company’s Visual Applications Research Lab, says all these software frameworks are leading to what he calls “choice overload,” and that concerns him greatly.

From his point of view, the road to parallel programming needs to be paved with open industry standards. And today that means MPI, OpenMP, and OpenCL. Given that some of Intel’s parallel software offerings, such as Cilk Plus and Array Building Blocks, are proprietary, that viewpoint sometimes puts him at odds with his own company. But Mattson’s role as an Intel researcher forces him to look beyond the one- or two-year timeframe of product cycles. He’s in it for the long term, and that means Mattson is looking at what is best for the ecosystem ten years out. “First and foremost, we have to make sure that the right standards exist and they run best on Intel products,” he says.

At SC10 this year, Mattson will be in full software evangelist mode, speaking at seven different tutorials, BoFs and panels on various parallel programming topics. Three of these are geared to fire up the troops for OpenCL, an open standard parallel programming framework for heterogeneous multicore architectures. HPCwire spoke with Mattson shortly before the conference about the importance of open standards, his unapologetic enthusiasm for OpenCL, and his open animosity toward the CUDA programming language.

HPCwire: What is the significance of OpenCL and why are you devoting so much time talking about it at SC10?

Tim Mattson: I think OpenCL is perhaps the most important development in the last five, if not the last ten, years. The reason I make such an over-the-top statement is that I believe the core to solving the parallel programming challenge is standards. Only an idiot software developer would write code using a proprietary API. Since I don’t like to work with idiots (laughs), I want to support good software developers out there by making sure they have the full suite of standards that they need.

So we have message passing covered: MPI. It’s great. We have shared memory covered: OpenMP. It’s great. The glaring hole — because frankly I don’t think any of us saw it coming in the early 2000s — is heterogeneous platforms. So we have to fill that hole, and that’s what OpenCL does. So I think it’s incredibly important because now with MPI, OpenMP and OpenCL we’ve got the space covered with these low-level basic programming standards that are required to move things forward.
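To make that division of labor concrete, here is a minimal sketch of the same vector addition written two ways: as an OpenMP loop for a shared address space and as an OpenCL C kernel for a heterogeneous device. The function name and kernel source are illustrative assumptions, not code from the interview.

```c
/* Shared memory: OpenMP spreads the loop across cores with a single pragma. */
void vadd_openmp(const float *a, const float *b, float *c, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* Heterogeneous platform: the equivalent OpenCL C kernel. Each work-item
   computes one element; the host program decides which device runs it. */
const char *vadd_kernel_src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c,\n"
    "                   const unsigned int n)\n"
    "{\n"
    "    unsigned int i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";
```

The OpenMP version needs nothing more than an OpenMP-aware compiler flag; the OpenCL kernel string is handed to the runtime, which compiles it for whatever device the host selects.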

HPCwire: Well as far as openness goes, NVIDIA’s CUDA programming API is available for any vendor to implement for their particular parallel hardware architecture. For example, AMD could support CUDA for their x86 and GPU platforms. So couldn’t CUDA be adopted as a standard as well?

Mattson: Well, just think about it. I can’t speak for AMD, but why would Intel put resources into an API or language when we have absolutely no say in how it’s going to evolve? To call CUDA a standard is just insulting. It’s not a standard until the various players can all have a voice in it. It’s ridiculous. If NVIDIA were serious about it, they would create an industry working group that owns the development of the CUDA API and languages and in which everyone has a full voice in what happens with it. Oh, by the way, that’s what OpenCL is.

HPCwire: But there is at least one example of a standard language that emerged from a vendor initiative. Java was controlled by Sun Microsystems for many years and was adopted as a commercial standard because it became so popular across the industry. Don’t you think CUDA could follow that model?

Mattson: Well, I know that’s what NVIDIA would like to see happen. And yes, Java is the one instance that would call into question how absolute my statement is. Java, though, was coming into a very different market and was tightly associated with the Web browser — a platform that cut across the industry. And Sun showed very early on that they were willing to support it as a cross-platform language. They had Java available on x86 and SPARC and showed a willingness to work across the vendors. NVIDIA — rationally, by the way — isn’t doing this with CUDA.

When you look closely at OpenCL, it covers everything CUDA can do and more. OpenCL has all the key vendors and covers a much wider space than CUDA. We’ve got the embedded people, the cell phone vendors, and game vendors all involved. So OpenCL is the right way to go; CUDA is the wrong way to go.

HPCwire: In the high performance computing community, though, there has been criticism that OpenCL doesn’t deliver the kind of performance required for HPC codes. Do you think that’s a fair assessment?

Mattson: That’s a statement that’s both true and false. There’s nothing pathological in the definition of OpenCL that prevents it from being every bit as efficient as CUDA. The thing about OpenCL is that it’s young; it just hasn’t been out very long. So it really comes down to the vendors as far as the quality of their implementations.

I think it’s important for the programmers out there — and let’s face it, they are the end user community for these technologies — to steer things in the right direction by insisting on standards. Look at how MPI and OpenMP came into existence. In both those cases, the user community insisted that these standards be the foundation of the software ecosystem, and the vendors stood behind them. We need people to do the same thing here and not get caught up with point solutions.

If NVIDIA engineers spent as much time optimizing their OpenCL implementation as they do CUDA, it would run as fast as CUDA. So the performance arguments don’t hold a lot of sway with me, except when someone can say that a feature of the language as defined is fundamentally going to be inefficient regardless of the quality of implementation. When people find those, we in the OpenCL group take it very seriously.

We’re roughly on a two-year cadence of coming out with new releases of the OpenCL spec, and we’re very focused on finding the weaknesses in OpenCL and aggressively evolving the language to stay right in line with the latest hardware trends.

HPCwire: There are plenty of other languages that address multicore parallelism, some of which have been introduced by Intel. How does OpenCL fit in?

Mattson: Let me be really clear. There are three distinct standards that address multicore. MPI, for example, works great on multicore. OpenMP, if you have a shared address space, works really well too. And OpenCL covers heterogeneous architectures. It’s really that trio that I’m pushing, and Intel is 100 percent behind them.
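As a reminder of how small the first member of that trio can be, here is a minimal MPI program; the same code runs whether its ranks share one multicore node or are spread across a cluster. It is a generic sketch, not an example taken from Mattson.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?  */

    /* Each rank contributes its own rank number; rank 0 gets the sum. */
    local = rank;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks across %d processes: %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Launched with, say, mpirun -np 8 ./a.out, all eight ranks can live on the cores of a single node, which is the sense in which MPI “works great on multicore.”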

On the other hand, yes, there is a trend that I find deeply disturbing of vendors wanting to distinguish themselves by creating new languages and proprietary APIs. It’s disturbing because time spent on a new language or proprietary API is time not spent on improving and establishing these standards. So this is where I’m kind of at odds with some of my colleagues at Intel. That’s just the way it goes.

Let’s face it. Vendors, left to their own devices, want people to adopt a proprietary API that locks them into their platform. That’s not bad. NVIDIA is completely rational in wanting to lock people into their platform with CUDA. If I were working at NVIDIA, I’d probably be trying to do the same thing. I think it’s up to the user community to refuse to let vendors get away with that game. They can do that by insisting on standards or open source solutions.

The three standards I mentioned are where I think most of the resources should go. But Intel did release Threading Building Blocks — TBB — as open source. That was a very responsible thing to do. I was very excited, as was the TBB team, when that happened.

HPCwire: Another language is Ct, which started as a research project at Intel and has now been commercialized. How does that fit into this parallel language ecosystem?

Mattson: Ct, which, by the way, is now called Array Building Blocks, is a higher-level abstraction of parallelism. While I’m a huge supporter of what they are doing with Array Building Blocks, [as for] how useful it will be in the marketplace, I’m not sure, because it is proprietary. But I think some of the optimizations it does under the covers are very important. There are a lot of really important things about that project.

But I think we should distinguish creating good technologies versus confusing the market by having too many options out there. In 2004, if you wanted to do parallel programming you had Windows threads, pthreads on Linux, OpenMP, and MPI. That handful of options was fine. Now there are a dozen or more parallel programming languages out there. So I think we’re losing ground. I think choice overload is real. And that concerns me deeply.

HPCwire: Do you think parallelizing established languages like Java and Python is a positive development?

Mattson: Let me tell you where I think things are going and where we’ll be in 10 years. The question is do we get there cleanly or do we get there with messy detours along the way. Ultimately I think we have to raise the level of abstraction, which is what you see with these efforts around building parallelism into Python. We need to focus on the higher level frameworks that people are increasingly using to write software.

This is really what I spend the bulk of my time doing in my personal research, and with a group at UC Berkeley — to define pattern languages from which we can derive the frameworks, which then map down to the lower-level languages. I really want to make it so that only a small number of performance-oriented, efficiency-layer programmers worry about these low-level languages — OpenCL, MPI, OpenMP, or TBB. But beyond that, people need to have some higher-level framework they can work with. A parallel Python project like Copperhead is one such example. I’m very excited about it because I think that’s clearly the direction things are moving.

I learned this most clearly looking at the gaming industry, because that industry has been the leader in adopting multicore, and I mean adopting multicore as a successful business venture. Researchers have been playing with it for a long time, but in terms of creating multicore software, selling it, and building profitable businesses around it, the gaming industry has led the charge.

They have this separation of concerns very sharply defined, and it works extremely well for them. Most of their programmers work in a higher-level scripting language or with collections of libraries written in C++. And then they have a small number of “technology programmers” who make up on the order of 10 percent of their software developers. They’re the guys who do the low-level stuff. And I think that kind of separation of concerns is what’s absolutely critical.

HPCwire: So you think higher level frameworks will be key to enabling these low-level parallel programming APIs you’re talking about?

Mattson: When we were sitting around creating OpenCL, we explicitly talked about that as our goal. In fact, you’ll find some places in the spec where we describe OpenCL as a hardware abstraction layer. We’re perfectly aware that OpenCL is obnoxiously low-level. It exposes so many details of the underlying platform. We achieve extreme portability by exposing everything and abstracting as little as we can. The reason we think that’s the right thing to do is because we view OpenCL ultimately as being a target for higher level frameworks. It’s young, so those higher level frameworks don’t exist yet, but I think they will and I think that will be the long range trend, not just for OpenCL, but for all these parallel languages.
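To illustrate just how low-level that hardware abstraction layer is, here is an abbreviated host-side sketch of what it takes to launch one trivial OpenCL 1.x kernel. Error handling and host-to-device transfers are omitted, and the kernel name matches the earlier illustrative vadd kernel rather than any real codebase.

```c
#include <CL/cl.h>

/* Abbreviated OpenCL 1.x host code: every layer of the platform is explicit.
   Error checks and data transfers are elided to keep the sketch short. */
void launch_vadd(const char *kernel_src, size_t n,
                 cl_mem a, cl_mem b, cl_mem c)
{
    cl_platform_id platform;
    cl_device_id   device;
    cl_int         err;

    clGetPlatformIDs(1, &platform, NULL);                    /* pick a platform */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT,
                   1, &device, NULL);                        /* pick a device   */

    cl_context       ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q   = clCreateCommandQueue(ctx, device, 0, &err);

    /* The kernel source is compiled at run time for whatever device was found. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel  k    = clCreateKernel(prog, "vadd", &err);

    cl_uint count = (cl_uint)n;
    clSetKernelArg(k, 0, sizeof(cl_mem),  &a);
    clSetKernelArg(k, 1, sizeof(cl_mem),  &b);
    clSetKernelArg(k, 2, sizeof(cl_mem),  &c);
    clSetKernelArg(k, 3, sizeof(cl_uint), &count);

    size_t global = n;                       /* one work-item per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(q);
}
```

Every one of those calls exposes a platform decision to the programmer, which is exactly why Mattson expects higher-level frameworks to sit on top of OpenCL rather than most developers writing it directly.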
