A Call to Arms for Parallel Programming Standards

By Nicole Hemsoth

November 16, 2010

Although the parallel programming landscape is relatively young, it’s already easy to get lost in. Besides legacy frameworks like MPI and OpenMP, we now have NVIDIA’s CUDA, OpenCL, Cilk, Intel Threading Building Blocks, Microsoft’s parallel programming extensions for .NET, and a whole gamut of PGAS languages.

And according to Intel’s Tim Mattson, that’s not necessarily a good thing. Mattson, who is a principal engineer (and parallel programming evangelist) at the company’s Visual Applications Research Lab, says all these software frameworks are leading to what he calls “choice overload,” and that concerns him greatly.

From his point of view, the road to parallel programming needs to be paved with open industry standards. And today that means MPI, OpenMP, and OpenCL. Given that some of Intel’s parallel software offerings, such as Cilk Plus and Array Building Blocks, are proprietary, that viewpoint sometimes puts him at odds with his own company. But Mattson’s role as an Intel researcher forces him to look beyond the one- or two-year timeframe of product cycles. He’s in it for the long term, which means he is looking at what is best for the ecosystem ten years out. “First and foremost, we have to make sure that the right standards exist and they run best on Intel products,” he says.

At SC10 this year, Mattson will be in full software evangelist mode, speaking at seven different tutorials, BoFs, and panels on various parallel programming topics. Three of these are geared to fire up the troops for OpenCL, an open standard parallel programming framework for heterogeneous multicore architectures. HPCwire spoke with Mattson shortly before the conference about the importance of open standards, his unapologetic enthusiasm for OpenCL, and his open animosity toward the CUDA programming language.

HPCwire: What is the significance of OpenCL and why are you devoting so much time talking about it at SC10?

Tim Mattson: I think OpenCL is perhaps the most important development in the last five, if not the last ten, years. The reason I make such an over-the-top statement is that I believe the core of solving the parallel programming challenge is standards. Only an idiot software developer would write code using a proprietary API. Since I don’t like to work with idiots (laughs), I want to support the good software developers out there by making sure they have the full suite of standards that they need.

So we have message passing covered: MPI. It’s great. We have shared memory covered: OpenMP. It’s great. The glaring hole — because frankly I don’t think any of us saw it coming in the early 2000s — is heterogeneous platforms. So we have to fill that hole, and that’s what OpenCL does. I think it’s incredibly important because now, with MPI, OpenMP, and OpenCL, we’ve got the space covered with the low-level basic programming standards that are required to move things forward.
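To make the division of labor concrete: the shared-memory model that OpenMP covers can be as lightweight as a single pragma on a loop. The sketch below is a hypothetical illustration, not code from the interview:

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

/* Hypothetical OpenMP illustration: sum an array across all cores
   sharing one address space.
   Compile with: gcc -std=c99 -fopenmp sum.c -o sum */
int main(void)
{
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 0.5;  /* sample data */

    /* One pragma parallelizes the loop; reduction(+:sum) gives each
       thread a private partial sum and combines them at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.1f using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}
```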

HPCwire: Well as far as openness goes, NVIDIA’s CUDA programming API is available for any vendor to implement for their particular parallel hardware architecture. For example, AMD could support CUDA for their x86 and GPU platforms. So couldn’t CUDA be adopted as a standard as well?

Mattson: Well, just think about it. I can’t speak for AMD, but why would Intel put resources into an API or language that we have absolutely no say in how it’s going to evolve? To call CUDA a standard is just insulting. It’s not a standard until the various players all have a voice in it. It’s ridiculous. If NVIDIA were serious about it, they would create an industry working group that owns the development of the CUDA API and language, one in which every player has a full voice in what happens with it. Oh, by the way, that’s what OpenCL is.

HPCwire: But there is at least one example of a standard language that emerged from a vendor initiative. Java was controlled by Sun Microsystems for many years and was adopted as a commercial standard because it became so popular across the industry. Don’t you think CUDA could follow that model?

Mattson: Well, I know that’s what NVIDIA would like to see happen. And yes, Java is the one instance that would call into question how absolute my statement is. Java, though, was coming into a very different market and was tightly associated with the Web browser — a platform that cut across the industry. And Sun showed very early on that they were willing to support it as a cross-platform language. They had Java available on x86 and SPARC and showed a willingness to work across the vendors. NVIDIA — rationally, by the way — isn’t doing this with CUDA.

When you look closely at OpenCL, it covers everything CUDA can do and more. OpenCL has all the key vendors and covers a much wider space than CUDA. We’ve got the embedded people, the cell phone vendors, and game vendors all involved. So OpenCL is the right way to go; CUDA is the wrong way to go.

HPCwire: In the high performance computing community, though, there has been criticism that OpenCL doesn’t deliver the kind of performance required for HPC codes. Do you think that’s a fair assessment?

Mattson: That’s a statement that’s both true and false. There’s nothing pathological in the definition of OpenCL that prevents it from being every bit as efficient as CUDA. The thing about OpenCL is that it’s young; it just hasn’t been out very long. So it really comes down to the vendors as far as the quality of their implementations.

I think it’s important for the programmers out there — and let’s face it, they are the end user community for these technologies — to steer things in the right direction by insisting on standards. Look at how MPI and OpenMP came into existence. In both those cases, the user community insisted that these standards be the foundation of the software ecosystem, and the vendors stood behind them. We need people to do the same thing here and not get caught up with point solutions.

If NVIDIA engineers spent as much time optimizing OpenCL as they spend optimizing CUDA, it would run just as fast. So the performance arguments don’t hold a lot of sway with me, except when someone can say that a feature of the language as defined is fundamentally going to be inefficient regardless of the quality of implementation. When people find those, we in the OpenCL group take it very seriously.

We’re roughly on a two-year cadence of coming out with new releases of the OpenCL spec, and we’re very focused on finding the weaknesses in OpenCL and aggressively evolving the language to stay right in line with the latest hardware trends.

HPCwire: There are plenty of other languages that address multicore parallelism, some of which have been introduced by Intel. How does OpenCL fit in?

Mattson: Let me be really clear. There are three distinct standards that address multicore. MPI, for example, works great on multicore. OpenMP, if you have a shared address space, works really well too. And OpenCL covers heterogeneous architectures. It’s really this trio that I’m pushing, and Intel is 100 percent behind them.
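The first leg of that trio runs the same way on one multicore chip as it does across a cluster: every MPI rank is an independent process that communicates by passing messages. A minimal, hypothetical example:

```c
#include <stdio.h>
#include <mpi.h>

/* Hypothetical MPI illustration: each rank (process) reports itself.
   Compile and run: mpicc hello.c -o hello && mpirun -np 4 ./hello */
int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count  */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```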

On the other hand, yes, there is a trend I find deeply disturbing: vendors wanting to distinguish themselves by creating new languages and proprietary APIs. It’s disturbing because time spent on a new language or proprietary API is time not spent on improving and establishing these standards. So this is where I’m kind of at odds with some of my colleagues at Intel. That’s just the way it goes.

Let’s face it. Vendors, left to their own devices, want people to adopt a proprietary API that locks them into their platform. That’s not bad. NVIDIA is completely rational in wanting to lock people into their platform with CUDA. If I were working at NVIDIA, I’d probably be trying to do the same thing. I think it’s up to the user community to refuse to let vendors get away with that game. They can do that by insisting on standards or open source solutions.

The three standards I mentioned are where I think most of the resources should go. But Intel did release Threading Building Blocks — TBB — as open source. That was a very responsible thing to do. I was very excited, as was the TBB team, when that happened.

HPCwire: Another language is Ct, which started as a research project at Intel and has now been commercialized. How does that fit in to this parallel language ecosystem?

Mattson: Ct, which, by the way, is now called Array Building Blocks, is a higher-level abstraction of parallelism. While I’m a huge supporter of what they are doing with Array Building Blocks, [as for] how useful it will be in the marketplace, I’m not sure, because it is proprietary. But I think some of the optimizations it does under the covers are very important. There are a lot of really important things about that project.

But I think we should distinguish between creating good technologies and confusing the market by having too many options out there. In 2004, if you wanted to do parallel programming you had Windows threads, pthreads on Linux, OpenMP, and MPI. That handful of options was fine. Now there are a dozen or more parallel programming languages out there. So I think we’re losing ground. I think choice overload is real. And that concerns me deeply.

HPCwire: Do you think parallelizing established languages like Java and Python is a positive development?

Mattson: Let me tell you where I think things are going and where we’ll be in 10 years. The question is, do we get there cleanly or do we get there with messy detours along the way? Ultimately I think we have to raise the level of abstraction, which is what you see with these efforts around building parallelism into Python. We need to focus on the higher-level frameworks that people are increasingly using to write software.

This is really what I spend the bulk of my time doing in my personal research, and with a group at UC Berkeley — defining pattern languages from which we can derive the frameworks, which then map down to the lower-level languages. I really want to make it so that only a small number of performance-oriented, efficiency-layer programmers worry about these low-level languages — OpenCL, MPI, OpenMP, or TBB. Beyond that, people need to have some higher-level framework they can work with. A parallel Python project like Copperhead is one such example. I’m very excited about it because I think that’s clearly the direction things are moving.

I learned this most clearly looking at the gaming industry, because that industry has been the leader in adopting multicore — and I mean adopting multicore as a successful business venture. Researchers have been playing with it for a long time, but in terms of creating multicore software, selling it, and building profitable businesses around it, the gaming industry has led the charge.

They have these separations of concerns very sharply defined, and it works extremely well for them. Most of their programmers work in a higher-level scripting language or with collections of libraries written in C++. And then they have a small number of “technology programmers,” on the order of 10 percent of their software developers. They’re the ones who do the low-level stuff. And I think that kind of separation of concerns is absolutely critical.

HPCwire: So you think higher-level frameworks will be key to enabling these low-level parallel programming APIs you’re talking about?

Mattson: When we were sitting around creating OpenCL, we explicitly talked about that as our goal. In fact, you’ll find some places in the spec where we describe OpenCL as a hardware abstraction layer. We’re perfectly aware that OpenCL is obnoxiously low-level. It exposes so many details of the underlying platform. We achieve extreme portability by exposing everything and abstracting as little as we can. The reason we think that’s the right thing to do is because we view OpenCL ultimately as being a target for higher-level frameworks. It’s young, so those higher-level frameworks don’t exist yet, but I think they will, and I think that will be the long-range trend, not just for OpenCL, but for all these parallel languages.
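That “obnoxiously low-level” character is easy to see in practice: even discovering which devices exist requires explicit host code. The sketch below is a hypothetical illustration using standard OpenCL 1.x API calls (on Apple platforms the header is <OpenCL/opencl.h>):

```c
#include <stdio.h>
#include <CL/cl.h>

/* Hypothetical OpenCL illustration: enumerate every platform and
   device on the machine. Compile with: gcc -std=c99 devices.c -lOpenCL */
int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms;

    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;  /* cap at our buffer */

    for (cl_uint p = 0; p < num_platforms; p++) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("Platform: %s\n", name);

        cl_device_id devices[8];
        cl_uint num_devices;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       8, devices, &num_devices);
        if (num_devices > 8) num_devices = 8;

        for (cl_uint d = 0; d < num_devices; d++) {
            char dev[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(dev), dev, NULL);
            printf("  Device: %s\n", dev);
        }
    }
    return 0;
}
```

Everything here — platforms and devices, and beyond them contexts and command queues — is exposed to the programmer, which is exactly the property that makes OpenCL a natural compilation target for the higher-level frameworks Mattson describes.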
