James Reinders: Parallelism Has Crossed a Threshold

By Tiffany Trader

February 4, 2016

Is the parallel everything era here? What happens when you can assume parallel cores? In the second half of our in-depth interview, Intel’s James Reinders discusses the eclipsing of single-core machines by their multi- and manycore counterparts and the ramifications of the democratization of parallel computing, remarking “we don’t need to worry about single-core processors anymore and that’s pretty significant in the world of programming for this next decade.” Other topics covered include the intentions behind OpenHPC and trends worth watching in 2016.

HPCwire: Looking back on 2015 what was important with regard to parallel computing? And going forward, what are your top project priorities for 2016?

James Reinders: I’ll start with my long-term perspective. I was reminded that it’s been about a decade now since multicore processors were introduced back in about 2004. We’ve had about a decade of going from ‘multicore processors are new’ to having them everywhere. Now we’re moving into manycore. And that has an effect I don’t think a lot of people talk about. Ten years ago, when I was teaching parallel programming, and even five or six years ago, there were still enough single-core machines around that when I talked to people about adding parallelism, if they weren’t in HPC, they had to worry about single-core machines. And I can promise you that the best serial algorithm and the best parallel algorithm — assuming you can do the same thing in parallel — are usually different. It may be subtle, but it’s usually enough of a headache that a lot of people outside of HPC were left having to run a conditional in their program: if I’m only running on a single core, do this; if I’m running in parallel, do it in parallel. It might be as simple as if-def’ing their OpenMP or not compiling with OpenMP, but if you wanted it to run on two machines, a single-core and a multicore, you pretty much had to have a parallel version and a non-parallel version for a lot of critical things, because the parallel program tended to carry just a tiny bit of overhead that would slow it down on a serial machine.
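
To make that “program twice” problem concrete, here is a minimal sketch, assuming a simple array reduction as the hot loop (the function and build setup are illustrative, not from any particular application): a serial build skips OpenMP entirely, while an OpenMP build takes the parallel path that carries a small threading overhead a single-core machine would pay for nothing.

```c
/* Minimal sketch of the dual-path approach Reinders describes.
 * Compile with -fopenmp (or the compiler's equivalent) for the parallel
 * version; without it, the plain serial loop is used.                  */
#include <stddef.h>

double sum_array(const double *a, size_t n) {
    double total = 0.0;
#ifdef _OPENMP
    /* Parallel path: wins on multicore, but thread startup and the
     * reduction add a little overhead that can lose on a single core. */
    #pragma omp parallel for reduction(+ : total)
    for (long i = 0; i < (long)n; ++i)
        total += a[i];
#else
    /* Serial path kept around for single-core machines. */
    for (size_t i = 0; i < n; ++i)
        total += a[i];
#endif
    return total;
}
```

The point is not the reduction itself but the maintenance cost: every performance-critical routine needed this kind of fork until single-core machines effectively disappeared.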

So I ran into cases eight to ten years ago where someone would implement something in parallel and it would run 40 percent faster on a dual-core machine, but 20 percent slower on a single-core because of that little overhead. Now we don’t have to worry about this anymore, even when I go outside of HPC – HPC’s been parallel for so long – although the node level is kind of doing the same thing in this time period. Seriously, I was in an AT&T store and they were advertising that they had quad-core tablets, and I was laughing. There are also some octa-core things now in that world. I just laughed because it reinforces my point that we don’t need to worry about single-core processors anymore, and that’s pretty significant in the world of programming for this next decade: it doesn’t hold back the stack – we don’t have to program twice in any field anymore. We can just assume parallel cores. I think that’s a big deal.

I’m a big believer in the democratization of parallel programming and HPC. I think we keep seeing things that make it more accessible: one of them is having parallel compute everywhere; the other is advancing tools. And I think we saw a couple of things introduced in 2015 toward that democratization that, combined with the fact that everything’s parallel, are going to be transformative in the upcoming year and decade. A few of them are big things that I was involved with at Intel. One was that we’ve had a pretty successful foray into promoting code modernization. To be honest, I wasn’t so sure myself, because I’ve been talking about parallelization for so long that I thought everyone was listening. I think there’s a lot of dialogue left to happen to truly get all of us to understand the ways to utilize parallelism. In our code modernization efforts, we’ve had things ranging from on-site trainings and events to online webinars and tools, and they’ve been extraordinarily popular.

I’m also very excited about OpenHPC, and my perspective comes from visiting all these different computing centers and getting the wonderful opportunity to have logons on different supercomputers around the world. I can use systems at TACC and Argonne and CSCS in Switzerland and many others, and they all solve similar problems. They all bring together, for the most part, all these different open source packages. They usually have multiple compilers on them; they have a way to allocate parts of the machine; they have a way to determine which version of GCC you’re using along with which version of the Intel compiler, and so on. They all solve the same problems, but they all do it differently. There are quite a few people like myself who have logons on multiple supercomputers. So if you talk to scientists doing their work, lots of them have multiple logons and they have to learn each one, but that also means they aren’t sharing as many BKMs [i.e., best-known methods]. There’s a lot of replication, and when you look at the people who are supporting your supercomputer, I think there’s a lot of opportunity to bring more commonality in there and let the staff that you have focus on higher level concerns or newer things. So OpenHPC really excites me because it’s bringing together packages much like these centers already have, and validating them — leaving the flexibility that you can pick and choose, but at least giving a baseline that’s validated to work together — giving a solution to these common problems, even offering pre-built binaries.

And I’ve had the good fortune to sit in on the community sessions, people in HPC who have been debating, and it’s interesting because there have been some pretty heated debates about the best way to solve some of these problems, but at the end of the day they may come up with two solutions to a problem, or maybe they’ll pick the one that’s best. But then they’re solving it industry-wide instead of one compute center at a time, and I think that’s going to help with the democratization of supercomputing, of HPC. So that got off the ground in 2015, and I think 2016 will be very interesting to see how that evolves. I expect to see more people join it, and I expect to see a lot of heated debates about the best way to solve something. But these are the sort of debates that have never really happened before, because one compute center can have an argument with another compute center about the best way to do something and they can both go off and do it differently. OpenHPC gives them the opportunity for the debate to happen and then maybe settle on one solution that both centers, or lots of centers, evolve together.

I’m also really excited that we got three Knights Landing machines deployed outside of Intel. In 2016, we’ll see that unfold, and there is enormous anticipation over Knights Landing. I think it’s very well justified, because taking this scalable manycore design to a processor is going to be a remarkable transformation in the parallel computing field, with a very bright future ahead of it.

HPCwire: With regard to OpenHPC, do you expect that more wary associates like IBM, which has been pushing the OpenPOWER ecosystem so strongly, would also be a member? We talked with them and they said they were looking at it for pretty much the reasons you’ve outlined. What are your thoughts about membership?

Reinders: You know I can’t speak for IBM or predict what they are going to do, but I do think that the purposes of OpenHPC, the problems it’s solving, would definitely be beneficial to IBM and quite a few other companies and centers that haven’t joined yet. I think a lot of people learned about it at Supercomputing, so that’s not a surprise. The goal of OpenHPC is certainly to be a true open community group. So the Linux Foundation – it’s their thing, we are heavily involved obviously, and they need to come up with the governance models and so forth – but I can say that it would be an extreme disappointment if it wasn’t open enough that everybody felt welcome to come participate and benefit from it. So I certainly hope to see them participate in 2016, but I think that ball is in their court.

There’s been some dialogue or debate about whether OpenHPC is Intel’s answer to OpenPOWER, and I don’t think that is the right way to look at it. OpenHPC has the opportunity to bring the entire industry together as opposed to being partisan to one architecture or another. Now that said, our heavy involvement in getting OpenHPC started means that we did what we do best, which is our best effort at making sure there are recipes already written up for our architecture, but hopefully they weren’t written in a way that prevents anyone from going and writing one for POWER or any other architecture. That wasn’t the goal, but frankly we’re not experts in other people’s architectures. So hopefully what we’ve done is open enough that someone else can come in and, if they want to invest effort for their own architecture, do so. It didn’t include the specification of a microprocessor in its design or anything, so it’s definitely different from OpenPOWER in that respect.

HPCwire: To recap, what are the top five things you are looking forward to in 2016?

Reinders: The top one to me is Knights Landing becoming more widely available beyond the three systems that are out there. I think that’s going to be huge. Having a manycore processor instead of a coprocessor is going to fuel a lot of interesting results and debates, which I think will be great. I think OpenHPC is going to be very interesting to watch as it evolves. Nothing comes for free, so it’s going to be up to the folks who show up to the table in the community and contribute, but I think that will be very significant during the year.

The other two areas I look forward to seeing evolve this year are code modernization and big data. I like seeing how we can get better and better at explaining the benefits of parallel computing to a broader set of users and to the users that you already think are doing parallel programming. I think code modernization will continue to stay on the docket as a very important dialogue. And then I think that big data, including data analytics and machine learning, will continue to see very significant developments with more nitty gritty work going on. There have been a lot of demonstrated kernels and some interesting work done, but this year the ramp-up is going to continue very fast. The interest in big data and what it can do for companies is very significant and I think we’ll continue to see a lot of things pop out there.

The other thing I’m following closely is the shift of visualization to the CPU. We’ve had some really interesting work in that area. There’s been an assumption that when you’re doing visualization, having a specialty piece of silicon or a GPU must be the answer, but it turns out GPUs are focused on the sort of visualization you need to do to display on the screen, the rasterization. A lot of visualization work is going on on supercomputers and machines where there are a lot of benefits to not rushing the rasterization so quickly. Ray tracing in particular is seeing a lot of use there, and it is clearly much better on the CPU, including Knights Landing. It’s been interesting watching that surprise people. There are a lot of people in the know who are doing visualization on CPUs and finding much higher performance for their purposes. I’ll go ahead and add that to my list since you asked for five things. I think in 2016 there will be more aha’s and realizations that visualization is increasingly becoming a CPU problem.
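
To give a flavor of why ray tracing maps so naturally onto CPU cores, here is a minimal sketch, an illustrative toy rather than code from any production renderer: it traces one primary ray per pixel against a single sphere and spreads the pixel loop across cores with one OpenMP pragma.

```c
/* Toy CPU ray tracer: one sphere, one primary ray per pixel.
 * Compile with -fopenmp to run the pixel loop across all cores;
 * the scene and image size are illustrative assumptions.        */
#include <math.h>
#include <stdio.h>

#define W 640
#define H 480

/* Distance to the first hit of a ray (origin o, normalized direction d)
 * with a unit sphere at the origin, or -1.0 if the ray misses.          */
static double hit_sphere(const double o[3], const double d[3]) {
    double b = 2.0 * (o[0]*d[0] + o[1]*d[1] + o[2]*d[2]);
    double c = o[0]*o[0] + o[1]*o[1] + o[2]*o[2] - 1.0;
    double disc = b*b - 4.0*c;              /* a == 1 for a normalized d */
    return disc < 0.0 ? -1.0 : (-b - sqrt(disc)) / 2.0;
}

int main(void) {
    static unsigned char img[H][W];

    /* Each pixel traces one independent ray, so this loop
     * parallelizes cleanly across however many cores exist. */
    #pragma omp parallel for schedule(static)
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            double o[3] = { 0.0, 0.0, -3.0 };            /* camera origin */
            double d[3] = { (x - W/2.0) / W,             /* ray direction */
                            (y - H/2.0) / H, 1.0 };
            double len = sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
            d[0] /= len; d[1] /= len; d[2] /= len;       /* normalize     */
            img[y][x] = (hit_sphere(o, d) > 0.0) ? 255 : 0;
        }
    }

    /* Write a PGM image so the result can be inspected. */
    FILE *f = fopen("sphere.pgm", "wb");
    if (!f) return 1;
    fprintf(f, "P5\n%d %d\n255\n", W, H);
    fwrite(img, 1, sizeof img, f);
    fclose(f);
    return 0;
}
```

Because every pixel’s ray is independent, the work scales to however many cores are available, which is exactly the property that makes a manycore processor attractive for this style of visualization.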

HPCwire: What are your thoughts on the National Strategic Computing Initiative to coordinate national efforts to pursue exascale and maximize the benefits of HPC, and should there be more investment in national research centers for software?

Reinders: I do like to point out that hardware is meaningless without software, so yes, the software challenges are substantial. If I had any say in it, I would encourage us to worry more about the connection of software to the domain experts rather than approaching it in a pure computer science fashion. I think there is a lot of interesting work going on in that space. If you had such centers, I would think of them as being applied science, and I think that would be an area of applied science that would be very useful.

As for the National Strategic Computing Initiative, how can I not love it? I think the fate of nations rests on their ability to harness compute power. There’s no doubt about that. Whether or not we want to be so dramatic as to call it a battlefield, it is definitely an area of competition. As an American, I’m very glad to see my country not missing that point. I just got back from India, another large democracy, and they are having a very similar discussion in their country and rolling out their initiatives. Every country has to consider the role that computing, especially high-performance computing, plays in the competitiveness of their nation. The US has been a leader in this area for so long that I think our dialogue is about how to continue to lead the world through our own activities.

This was the second part of a two-part interview. To read the first half, where Reinders discusses the architectural trade-offs of Knights Landing’s manycore design and offers advice for expectant users, go here.
