James Reinders: Parallelism Has Crossed a Threshold

By Tiffany Trader

February 4, 2016

Is the parallel everything era here? What happens when you can assume parallel cores? In the second half of our in-depth interview, Intel’s James Reinders discusses the eclipsing of single-core machines by their multi- and manycore counterparts and the ramifications of the democratization of parallel computing, remarking “we don’t need to worry about single-core processors anymore and that’s pretty significant in the world of programming for this next decade.” Other topics covered include the intentions behind OpenHPC and trends worth watching in 2016.

HPCwire: Looking back on 2015 what was important with regard to parallel computing? And going forward, what are your top project priorities for 2016?

James Reinders: I’ll start with my long-term perspective. I was reminded that it’s been about a decade now since multicore processors were introduced, back in about 2004. We’ve had about a decade of going from ‘multicore processors are new’ to having them everywhere. Now we’re moving into manycore. And that has an effect I don’t think a lot of people talk about. Ten years ago, when I was teaching parallel programming, and even five or six years ago, there were still enough single-core machines around that when I talked to people about adding parallelism, if they weren’t in HPC, they had to worry about single-core machines. And I can promise you that the best serial algorithm and the best parallel algorithm — assuming you can do the same thing in parallel — are usually different. It may be subtle, but it’s usually enough of a headache that a lot of people outside of HPC were left running a conditional in their program: if I’m only running on a single core, do this; if I’m in parallel, do it in parallel. It might be as simple as if-def’ing their OpenMP or not compiling with OpenMP, but if you wanted it to run on two machines, a single-core and a multicore, you pretty much had to have a parallel version and a non-parallel version for a lot of critical things, because the parallel program tended to have just a tiny bit of overhead that would slow it down if run on a serial machine.
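To make the “program twice” pattern concrete, here is a minimal sketch (not from the interview) of the kind of dual-path code Reinders is describing: the same source file builds as a plain serial loop on a single-core target and as an OpenMP parallel reduction otherwise. The function name and the array-sum example are hypothetical illustrations; only the standard `_OPENMP` macro and the OpenMP reduction pragma are real features.

```c
#include <stddef.h>

/* Hypothetical illustration of keeping a serial and a parallel path in one
 * source file. Built with OpenMP enabled (e.g., -fopenmp), the compiler
 * defines _OPENMP and the parallel reduction is used; built without it,
 * the plain serial loop runs and no threading overhead is paid. */
double sum_array(const double *a, size_t n)
{
    double total = 0.0;
#ifdef _OPENMP
    /* Parallel path: the reduction carries a small startup and scheduling
     * cost that only pays off when more than one core is available. */
    #pragma omp parallel for reduction(+:total)
    for (long i = 0; i < (long)n; i++)
        total += a[i];
#else
    /* Serial path kept around for single-core machines. */
    for (size_t i = 0; i < n; i++)
        total += a[i];
#endif
    return total;
}
```

The point Reinders goes on to make is that once single-core machines can be ignored, the serial branch and the `#ifdef` bookkeeping around it can simply go away.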

So I ran into cases 8-10 years ago where someone would implement something in parallel and it would run 40 percent faster on a dual-core machine, but 20 percent slower on a single-core because of that little bit of overhead. Now we don’t have to worry about this anymore, even when I go outside of HPC – HPC’s been parallel for so long, although at the node level it has been going through this same transition over the period. Seriously, I was in an AT&T store and they were advertising that they had quad-core tablets, and I was laughing. There are also some octo-core things now in that world. I just laughed because it reinforces my point that we don’t need to worry about single-core processors anymore, and that’s pretty significant in the world of programming for this next decade: it doesn’t hold back the stack – we don’t have to program twice in any field anymore. We can just assume parallel cores. I think that’s a big deal.

I’m a big believer in the democratization of parallel programming and HPC. We keep seeing things that make it more accessible: one of them is having parallel compute everywhere, the other is advancing tools. And I think we saw a couple of things introduced in 2015 toward that democratization that, combined with the fact that everything’s parallel, are going to be transformative in the upcoming year and decade. A few of them are big things that I was involved with at Intel. One was that we’ve had a pretty successful foray into promoting code modernization. To be honest, I wasn’t so sure myself at first, because I’ve been talking about parallelization for so long that I thought everyone was listening. I think there’s a lot of dialogue left to happen to truly get all of us to understand the ways to utilize parallelism. In our code modernization efforts, we’ve had things ranging from on-site trainings and events to online webinars and tools, and they’ve been extraordinarily popular.

I’m also very excited about OpenHPC, and my perspective comes from visiting all these different compute centers; I get the wonderful opportunity to have logons on different supercomputers around the world. I can use systems at TACC and Argonne and CSCS in Switzerland and many others, and they all solve similar problems. They all bring together, for the most part, all these different open source packages. They usually have multiple compilers on them; they have a way to allocate parts of the machine; they have a way to determine which version of GCC you’re using along with which version of the Intel compiler, etc. They all solve the same problems, but they all do it differently. There are quite a few people like myself who have logons on multiple supercomputers. If you talk to scientists doing their work, lots of them have multiple logons and they have to learn each one, which also means they aren’t sharing as many BKMs [i.e., best-known methods]. There’s a lot of replication, and when you look at the people who are supporting your supercomputer, I think there’s a lot of opportunity to bring more commonality in there and let the staff that you have focus on higher-level concerns or newer things. So OpenHPC really excites me because it’s bringing together packages much like these centers already have, and validating them — leaving the flexibility that you can pick and choose, but at least giving a baseline that’s validated to work together — giving a solution to these common problems, even providing pre-built binaries.

And I’ve had the good fortune to sit in on the community sessions, with people in HPC debating, and it’s interesting because there have been some pretty heated debates about the best way to solve some of these problems. At the end of the day they may come up with two solutions to a problem, or maybe they’ll pick the one that’s best. But then they’re solving it industry-wide instead of one compute center at a time, and I think that’s going to help with the democratization of supercomputing and HPC. So that got off the ground in 2015, and I think 2016 will be very interesting to see how that evolves. I expect to see more people join it, and I expect to see a lot of heated debates about the best way to solve something. But these are the sort of debates that have never really happened before, because one compute center could have an argument with another compute center about the best way to do something and they could both go off and do it differently. OpenHPC gives them the opportunity for the debate to happen and then maybe settle on one solution that both centers, or lots of centers, evolve together.

I’m also really excited that we got three Knights Landing machines deployed outside of Intel. In 2016, we’ll see that unfold, and there is enormous anticipation over Knights Landing. I think it’s very well justified, because taking this scalable manycore design to a standalone processor is going to be a remarkable transformation in the parallel computing field, with a very bright future ahead of it.

HPCwire: With regard to OpenHPC, do you expect that warier parties like IBM, which has been pushing the OpenPOWER ecosystem so strongly, will also become members? We talked with them and they said they were looking at it for pretty much the reasons you’ve outlined. What are your thoughts about membership?

Reinders: You know, I can’t speak for IBM or predict what they are going to do, but I do think that the purposes of OpenHPC, the problems it’s solving, would definitely be beneficial to IBM and quite a few other companies and centers that haven’t joined yet. I think a lot of people first learned about it at the Supercomputing conference, so that’s not a surprise. The goal of OpenHPC is certainly to be a true open community group. So the Linux Foundation – it’s their thing; we are heavily involved, obviously, and they need to come up with the governance models and so forth – but I can say that it would be an extreme disappointment if it weren’t open enough that everybody felt welcome to come participate and benefit from it. So I certainly hope to see them participate in 2016, but I think that ball is in their court.

There’s been some dialogue or debate about whether OpenHPC is Intel’s answer to OpenPOWER, and I don’t think that’s the right way to look at it. OpenHPC has the opportunity to bring the entire industry together as opposed to being partisan to one architecture or another. Now, that said, our heavy involvement in getting OpenHPC started means that we did what we do best, which is our best effort at making sure that there are recipes already written up for our architecture, but hopefully they weren’t written in a way that prevents someone from going and writing one for POWER or any other architecture. That wasn’t the goal, but frankly we’re not experts in other people’s architectures. So hopefully what we’ve done is open enough that someone else can come in and, if they want to invest the effort for their own architecture, do so. OpenHPC didn’t include the specification of a microprocessor in its design or anything, so it’s definitely different from OpenPOWER in that respect.

HPCwire: To recap, what are the top five things you are looking forward to in 2016?

Reinders: The top one to me is Knights Landing becoming more widely available beyond the three systems that are out there. I think that’s going to be huge. Having a manycore processor instead of a coprocessor is going to fuel a lot of interesting results and debates, which I think will be great. I think OpenHPC is going to be very interesting to watch as it evolves. Nothing comes for free, so it’s going to be up to the folks that show up to the table in the community to contribute, but I think that will be very significant during the year.

The other two areas I look forward to seeing evolve this year are code modernization and big data. I like seeing how we can get better and better at explaining the benefits of parallel computing, both to a broader set of users and to the users you already think are doing parallel programming. I think code modernization will continue to stay on the docket as a very important dialogue. And then I think that big data, including data analytics and machine learning, will continue to see very significant developments, with more nitty-gritty work going on. There have been a lot of demonstrated kernels and some interesting work done, but this year the ramp-up is going to continue very fast. The interest in big data and what it can do for companies is very significant, and I think we’ll continue to see a lot of things pop out there.

The other thing I’m following closely is the shift of visualization to the CPU. We’ve had some really interesting work in that area. There’s kind of been an assumption that when you’re doing visualization, having a specialty piece of silicon or a GPU to do the visualization must be the answer, but it turns out GPUs are focused on the sort of visualization you need to do to display on the screen, the rasterization. A lot of visualization work is going on on supercomputers and machines where there are a lot of benefits to not rushing the rasterization so quickly. Ray tracing in particular is seeing a lot of use, and it is clearly much better on the CPU, including Knights Landing. It’s been interesting watching that surprise people. There are a lot of people in the know who are doing visualization on CPUs and finding much higher performance for their purposes. I’ll go ahead and add that to my list since you asked for five things. I think in 2016 there will be more ‘aha’ moments and realizations that visualization is increasingly becoming a CPU problem.

HPCwire: What are your thoughts on the National Strategic Computing Initiative to coordinate national efforts to pursue exascale and maximize the benefits of HPC, and should there be more investment in national research centers for software?

Reinders: I do like to point out that hardware is meaningless without software, so yes, the software challenges are substantial. If I had any say in it, I would encourage us to worry more about the connection of software to the domain experts rather than treating it as a pure computer science exercise. I think there is a lot of interesting work going on in that space. If you had such centers, I would think of them as doing applied science, and I think that would be an area of applied science that would be very useful.

As for the National Strategic Computing Initiative, how can I not love it? I think the fate of nations rests on their ability to harness compute power. There’s no doubt about that. Whether or not we want to be so dramatic as to call it a battlefield, it is definitely an area of competition. As an American, I’m very glad to see my country not missing that point. I just got back from India, another large democracy, and they are having a very similar discussion in their country and rolling out their own initiatives. Every country has to consider the role that computing, especially high-performance computing, plays in the competitiveness of their nation. The US has been a leader in this area for so long that I think our dialogue is about how to continue to lead the world through our own activities.

This was the second part of a two-part interview. To read the first half, where Reinders discusses the architectural trade-offs of Knights Landing’s manycore design and offers advice for expectant users, go here.
