James Reinders: Parallelism Has Crossed a Threshold

By Tiffany Trader

February 4, 2016

Is the parallel everything era here? What happens when you can assume parallel cores? In the second half of our in-depth interview, Intel’s James Reinders discusses the eclipsing of single-core machines by their multi- and manycore counterparts and the ramifications of the democratization of parallel computing, remarking “we don’t need to worry about single-core processors anymore and that’s pretty significant in the world of programming for this next decade.” Other topics covered include the intentions behind OpenHPC and trends worth watching in 2016.

HPCwire: Looking back on 2015 what was important with regard to parallel computing? And going forward, what are your top project priorities for 2016?

James Reinders: I’ll start with my long-term perspective. I was reminded that it’s been about a decade now since multicore processors were introduced, back around 2004. We’ve had about a decade of going from ‘multicore processors are new’ to having them everywhere. Now we’re moving into manycore. And that has an effect I don’t think a lot of people talk about. Ten years ago, when I was teaching parallel programming, and even five or six years ago, there were still enough single-core machines around that when I talked to people about adding parallelism, if they weren’t in HPC, they had to worry about single-core machines. And I can promise you that the best serial algorithm and the best parallel algorithm – assuming you can do the same thing in parallel – are usually different. It may be subtle, but it’s usually enough of a headache that a lot of people outside of HPC were left having to run a conditional in their program: if I’m only running on a single core, do this; if I’m on a parallel machine, do it in parallel. It might be as simple as #ifdef-ing their OpenMP code or not compiling with OpenMP, but if you wanted a program to run on two machines, a single-core and a multicore, you pretty much had to have a parallel version and a non-parallel version for a lot of critical things, because the parallel program tended to have just a tiny bit of overhead that would slow it down on a serial machine.
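To make that concrete, here is a minimal sketch of the dual-path pattern Reinders describes, in C with OpenMP (the kernel and names are illustrative, not from the interview). Built without OpenMP, the guarded pragma disappears and the loop runs with no threading overhead; built with OpenMP, the same loop is split across cores:

```c
#include <stdio.h>

/* The same kernel compiled two ways. Without -fopenmp (or /Qopenmp),
 * _OPENMP is undefined, the pragma is skipped, and this is a pure
 * serial loop with no parallel-runtime overhead. With OpenMP enabled,
 * the iterations are divided among all available cores. */
double sum(const double *a, long n) {
    double total = 0.0;
#ifdef _OPENMP
    #pragma omp parallel for reduction(+:total)
#endif
    for (long i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void) {
    enum { N = 1000000 };
    static double a[N];
    for (long i = 0; i < N; i++)
        a[i] = 1.0;
    printf("sum = %.0f\n", sum(a, N));  /* expect 1000000 */
    return 0;
}
```

The #ifdef is strictly optional here, since compilers ignore unrecognized pragmas, but it makes the two builds Reinders mentions explicit; it is exactly the kind of dual-path maintenance burden that disappears once every target machine is parallel.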

So I ran into cases 8-10 years ago where someone would implement something in parallel and it would run 40 percent faster on a dual-core machine, but 20 percent slower on a single-core machine because of that little overhead. Now we don’t have to worry about this anymore, even when I go outside of HPC – HPC has been parallel for so long, although at the node level it has been going through the same transition in this period. Seriously, I was in an AT&T store and they were advertising quad-core tablets, and I was laughing. There are also some octo-core devices now in that world. I just laughed because it reinforces my point: we don’t need to worry about single-core processors anymore, and that’s pretty significant in the world of programming for this next decade. It doesn’t hold back the stack – we don’t have to program twice in any field anymore. We can just assume parallel cores. I think that’s a big deal.

I’m a big believer in the democratization of parallel programming and HPC. I think we keep seeing things that make it more accessible: one of them is having parallel compute everywhere, another is advancing tools. And I think we saw a couple of things introduced in 2015 toward that democratization which, combined with the fact that everything’s parallel, are going to be transformative in the upcoming year and decade. A few of them are big things that I was involved with at Intel. One was that we’ve had a pretty successful foray into promoting code modernization. To be honest, I wasn’t so sure myself at first; I’ve been talking about parallelism for so long that I thought everyone was already listening. I think there’s a lot of dialogue left to happen to truly get all of us to understand the ways to utilize parallelism. In our code modernization efforts, we’ve had things ranging from on-site trainings and events to online webinars and tools, and they’ve been extraordinarily popular.

I’m also very excited about OpenHPC. My perspective comes from visiting all these different computer centers; I get the wonderful opportunity to have logons on different supercomputers around the world. I can use systems at TACC and Argonne and CSCS in Switzerland and many others, and they all solve similar problems. They all bring together, for the most part, the same collection of open source packages. They usually have multiple compilers on them; they have a way to allocate parts of the machine; they have a way to determine which version of GCC you’re using along with which version of the Intel compiler, and so on. They all solve the same problems, but they all do it differently. There are quite a few people like me who have logons on multiple supercomputers. If you talk to scientists doing their work, lots of them have multiple logons and have to learn each system separately, which also means they aren’t sharing as many BKMs [i.e., best-known methods]. There’s a lot of replication, and when you look at the people supporting your supercomputer, I think there’s a lot of opportunity to bring more commonality in and let the staff you have focus on higher-level concerns or newer things. So OpenHPC really excites me because it brings together packages much like these centers already have and validates them – leaving you the flexibility to pick and choose, but at least giving a baseline that’s validated to work together – giving a solution to these common problems, even with pre-built binaries.

And I’ve had the good fortune to sit in on the community sessions, with people in HPC debating, and it’s interesting because there have been some pretty heated debates about the best way to solve some of these problems. At the end of the day they may come up with two solutions to a problem, or maybe they’ll pick the one that’s best. But then they’re solving it industry-wide instead of one compute center at a time, and I think that’s going to help with the democratization of supercomputers and of HPC. So that got off the ground in 2015, and I think 2016 will be very interesting to see how it evolves. I expect to see more people join, and I expect to see a lot of heated debates about the best way to solve something. But these are the sort of debates that have never really happened before, because one compute center could argue with another about the best way to do something and then both could go off and do it differently. OpenHPC gives them the opportunity to have the debate and then maybe settle on one solution that both centers, or lots of centers, evolve together.

I’m also really excited that we got three Knights Landing machines deployed outside of Intel. In 2016, we’ll see that unfold, and there is enormous anticipation over Knights Landing. I think it’s very well justified, because taking this scalable manycore design to a processor is going to be a remarkable transformation in the parallel computing field, with a very bright future ahead of it.

HPCwire: With regard to OpenHPC, do you expect that more wary parties like IBM, which has been pushing the OpenPOWER ecosystem so strongly, would also become members? We talked with them and they said they were looking at it for pretty much the reasons you’ve outlined. What are your thoughts about membership?

Reinders: You know, I can’t speak for IBM or predict what they are going to do, but I do think that the purposes of OpenHPC, the problems it’s solving, would definitely be beneficial to IBM and quite a few other companies and centers that haven’t joined yet. I think a lot of people learned about it at Supercomputing, so that’s not a surprise. The goal of OpenHPC is certainly to be a true open community group. It’s the Linux Foundation’s thing – we are heavily involved, obviously, and they need to come up with the governance models and so forth – but I can say that it would be an extreme disappointment if it weren’t open enough that everybody felt welcome to come participate and benefit from it. So I certainly hope to see them participate in 2016, but I think that ball is in their court.

There’s been some dialogue or debate about whether OpenHPC is Intel’s answer to OpenPOWER, and I don’t think that is the right way to look at it. OpenHPC has the opportunity to bring the entire industry together, as opposed to being partisan to one architecture or another. Now that said, our heavy involvement in getting OpenHPC started means that we did what we do best, which is our best effort at making sure that there are recipes already written up for our architecture; but hopefully they weren’t written in a way that prevents someone from going and writing one for POWER or any other architecture. That wasn’t the goal, but frankly we’re not experts in other people’s architectures. So hopefully what we’ve done is left open enough that someone else can come in and, if they want to invest effort for their own architecture, do so. It didn’t include the specification of a microprocessor in its design or anything, so it’s definitely different from OpenPOWER in that respect.

HPCwire: To recap, what are the top five things you are looking forward to in 2016?

Reinders: The top one to me is Knights Landing becoming more widely available beyond the three systems that are out there. I think that’s going to be huge. Having a manycore processor instead of a coprocessor is going to fuel a lot of interesting results and debates, which I think will be great. I think OpenHPC is going to be very interesting to watch as it evolves. Nothing comes for free, so it’s going to be up to the folks who show up to the table in the community to contribute, but I think that will be very significant during the year.

The other two areas I look forward to seeing evolve this year are code modernization and big data. I like seeing how we can get better and better at explaining the benefits of parallel computing, both to a broader set of users and to the users you already think are doing parallel programming. I think code modernization will continue to stay on the docket as a very important dialogue. And then I think that big data, including data analytics and machine learning, will continue to see very significant developments, with more nitty-gritty work going on. There have been a lot of demonstrated kernels and some interesting work done, but this year the ramp-up is going to continue very fast. The interest in big data and what it can do for companies is very significant, and I think we’ll continue to see a lot of things pop out there.

The other thing I’m following closely is the shift of visualization to the CPU. We’ve had some really interesting work in that area. There’s been an assumption that when you’re doing visualization, having a specialty piece of silicon or a GPU must be the answer, but it turns out GPUs are focused on the sort of visualization you need to display on the screen – the rasterization. A lot of visualization work is going on on supercomputers and machines where there are a lot of benefits to not rushing to rasterization so quickly. Ray tracing in particular is seeing a lot of use, and it is clearly much better on the CPU, including Knights Landing. It’s been interesting watching that surprise people. There are a lot of people in the know doing visualization on CPUs and finding much higher performance for their purposes. I’ll go ahead and add that to my list since you asked for five things. I think in 2016 there will be more ‘aha’ moments and realizations that visualization is increasingly becoming a CPU problem.
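As a rough illustration of why ray tracing maps so naturally onto manycore CPUs, consider this toy C/OpenMP sketch (the trace function is a hypothetical stand-in for a real ray-object intersection test, not code from any actual renderer): every pixel’s ray is computed independently, so a single parallel loop over the image keeps all cores busy with no rasterization pipeline involved.

```c
#include <stdio.h>
#include <math.h>

#define W 640
#define H 480

/* Hypothetical per-pixel "trace": shades a disc as a stand-in for a
 * real ray-sphere intersection. The point is the structure, not the
 * shading: each pixel depends only on its own ray. */
static float trace(float u, float v) {
    float d2 = u * u + v * v;   /* squared distance from image center */
    return d2 < 0.25f ? 1.0f - sqrtf(d2) : 0.0f;
}

int main(void) {
    static float image[H][W];
    /* Rays are independent, so one pragma distributes the whole image
     * across however many cores the CPU provides. */
    #pragma omp parallel for schedule(dynamic)
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            image[y][x] = trace((x - W / 2) / (float)W,
                                (y - H / 2) / (float)H);
    printf("center brightness: %.2f\n", image[H / 2][W / 2]);
    return 0;
}
```

Production ray tracers add acceleration structures and wide vectorization on top of this, but the independence of rays is the property that makes manycore CPUs competitive for this kind of visualization.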

HPCwire: What are your thoughts on the National Strategic Computing Initiative to coordinate national efforts to pursue exascale and maximize the benefits of HPC, and should there be more investment in national research centers for software?

Reinders: I do like to point out that hardware is meaningless without software, so yes, the software challenges are substantial. If I had any say in it, I would encourage us to worry more about the connection of software to the domain experts, rather than approaching it in a pure computer science fashion. I think there is a lot of interesting work going on in that space. If you had such centers, I would think of them as doing applied science, and I think that would be an area of applied science that would be very useful.

As for the National Strategic Computing Initiative, how can I not love it? I think the fate of nations rests on their ability to harness compute power. There’s no doubt about that. Whether or not we want to be so dramatic as to call it a battlefield, it is definitely an area of competition. As an American, I’m very glad to see my country not missing that point. I just got back from India, another large democracy, and they are having a very similar discussion in their country and rolling out their own initiatives. Every country has to consider the role that computing, especially high-performance computing, plays in its national competitiveness. The US has been a leader in this area for so long that I think our dialogue is about how to continue to lead the world through our own activities.

This was the second part of a two-part interview. To read the first half, where Reinders discusses the architectural trade-offs of Knights Landing’s manycore design and offers advice for expectant users, go here.
