Scaling the Exa

By Nicole Hemsoth

June 3, 2010

The petascale era of supercomputing is barely underway, but the effort to reach the exascale level has already begun. In fact, it began three years ago with an international effort to develop a software infrastructure for exaflop supercomputers.

The International Exascale Software Project (IESP) was formed with the realization that the software used for terascale and now petascale computing is inadequate for exascale computing. The IESP brings together government agencies, vendors and other stakeholders in the HPC community, with the goal of designing and building a system software stack to support this future level of computing. That will entail managing parallelism orders of magnitude beyond that of the top systems in the field today.

The University of Tennessee’s Jack Dongarra has been involved with the IESP since its conception back in 2007. At ISC’10 he chaired a session that outlined its goals and gave a status report on the project’s progress. We got a chance to speak with him before the conference to discuss exascale software, the project, and the importance of developing this software for the global HPC community.

HPCwire: We've had to go through a transition like this before. What happened to software in the transition from terascale to petascale?

Jack Dongarra: Today we have very little software that runs at the petascale level. What we have is essentially terascale software, in that it routinely performs at teraflop levels on our largest machines. Only through extreme efforts do we get to claim petaflop levels for our applications. It really requires a rethinking.

When we made the transition from vector machines to parallel systems, that was a big deal. We're encountering the same kind of transition today in terms of rewriting our software. Just in terms of the things that I deal with, which is writing numerical libraries, we're rewriting everything to address the issues of multicore.

Multicore presents many challenges in terms of performance that were not present with traditional parallel computing. I know that seems a little strange, but it's because with multicore things happen much faster: the bandwidth has increased and the latency has gotten better. So you can't hesitate in what you're doing, or you'll lose too much performance.

The model that we had for parallel processing was a fork-join sort of model — what I'll call a bulk synchronous form. You had a loop, you did a bunch of things in parallel, and then you joined together at the end of that loop. You can't do that with multicore. You need to do more asynchronous processing.

So you need to develop algorithms that present a form of execution that is asynchronous and breaks that model of loop-level parallelism, because having everything wait for the slowest task to finish is just too inefficient on these systems. It requires a rethinking of our algorithms and a rewriting of our software. It's that kind of thing that we have to go through again as we go to exascale.
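To make that contrast concrete, here is a minimal sketch in Python — an illustration of the idea, not code from any of the libraries discussed. The thread pool stands in for a real task scheduler, and `work` stands in for one tile of a numerical kernel:

```python
# Illustrative sketch: bulk-synchronous fork-join vs. asynchronous,
# completion-driven execution. Hypothetical example, not IESP code.
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(chunk):
    # Stand-in for one block/tile of a numerical computation.
    return sum(x * x for x in chunk)

chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]

# Fork-join (bulk synchronous): fork the tasks, then join at the end
# of the parallel loop. pool.map yields results in submission order,
# so the sum cannot complete until every task -- including the
# slowest -- has finished.
with ThreadPoolExecutor() as pool:
    total_sync = sum(pool.map(work, chunks))

# Asynchronous: consume each result the moment it is ready, so fast
# tasks are never held hostage by slow ones and downstream work can
# start as soon as its inputs exist.
total_async = 0
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(work, c) for c in chunks]
    for fut in as_completed(futures):
        total_async += fut.result()

assert total_sync == total_async
```

In a real library the asynchronous version would be driven by a dependency graph among tasks rather than a simple completion loop, but the point is the same: no global barrier at the end of each loop.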

HPCwire: Is this transition going to be different?

Dongarra: I think it is different, and it's different for a few reasons. One is that we learned some lessons from the previous transitions, and we don't want to repeat that experience. The second reason is that there's a general recognition that this change is going to be more dramatic than the previous one. Going from thousands to hundreds of thousands of threads of execution, which is what we did before, is different from going from hundreds of thousands to perhaps billions of threads. That change is going to have an enormous impact. And tied together with some of the architectural features being proposed today for exascale systems, it's going to put a lot of tension right at the software level.

Because of the steepness of the ascent from petascale to exascale, we should start this process as soon as possible. The extreme parallelism, the hybrid designs, and a memory bandwidth bottleneck that will only grow more severe as we move into the future all mean we have to start addressing these issues now.

Also, the relative amount of memory that we have on exascale systems — that balance between FLOPS and bytes — is going to be changing. In the old, old days we thought: one byte per FLOP. When you look at petascale machines, that ratio has changed quite a bit, and when you look toward exascale, it’s going to change again in an even more dramatic way. That will cause some issues with the ability of our algorithms to scale as you grow the problem size.
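As a rough illustration of how that balance shifts, with assumed round numbers rather than figures from the interview:

```latex
\underbrace{\frac{1\ \text{byte}}{1\ \text{flop/s}}}_{\text{old ideal}} = 1
\qquad
\underbrace{\frac{0.3\ \text{PB}}{2\ \text{Pflop/s}}}_{\text{petascale-class}} \approx 0.15
\qquad
\underbrace{\frac{50\ \text{PB}}{1\ \text{Eflop/s}}}_{\text{exascale projection}} \approx 0.05
```

An algorithm that scales by growing the problem to fill memory has proportionally less data to work with per unit of compute at each step in that progression.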

The other issue deals with fault tolerance. When you have billions of parallel things, you're going to have failures. So it's going to become more of a normal part of computing that we're going to be dropping or losing part of the computation. We have to be prepared to adjust to that somehow. In the past, we didn't have to worry so much about that, and when we did, we performed a checkpoint and a restart. Well, for exascale, you can't do a checkpoint. There's just too much memory in the system, so it would take too long.
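A back-of-the-envelope estimate shows why; the memory size and aggregate I/O bandwidth below are assumed for illustration:

```latex
t_{\text{checkpoint}} \approx \frac{M_{\text{system}}}{B_{\text{I/O}}}
= \frac{50\ \text{PB}}{1\ \text{TB/s}}
= 5 \times 10^{4}\ \text{s} \approx 14\ \text{hours}
```

If faults arrive more often than once per checkpoint interval, a full-memory checkpoint can never complete, so global checkpoint/restart stops being a viable recovery strategy.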

The software infrastructure can't deal with that today, so this is a call to action to deal with these hardware changes. If we don't do anything, the software ecosystem will remain stagnant. So we have to look at different approaches and perhaps be more involved in the design of the architecture, in the sense that there will be co-design with the algorithms and applications people, helping to design machines that make sense.

HPCwire: Do you think there’s general agreement about what the hardware will look like?

Dongarra: There are a number of constraints on the architecture for exascale. One constraint is cost. Everybody says a machine can cost no more than $200 million. You're going to spend half your money on memory, so you have to take that into consideration.

There are also other constraints that come into play. For example, the machine can consume no more than 20 MW. That's thought to be the upper limit for a reasonable machine from the standpoint of power, cooling, etc. The machine we have here at Oak Ridge — the Jaguar supercomputer — draws about 7 megawatts.

And then there's the question of what kind of processors we're going to have. The thinking today is that there are going to be two paths — what some people call "swim lanes" — to exascale hardware.

One is going to be lightweight processors. By lightweight, we mean things like the Blue Gene [PowerPC] processor. One general way to characterize this architecture is 1 GHz in processor speed, one thousand cores per node, and one million nodes per system. A second path to exascale is commodity processors together with accelerators, such as GPUs. The software would support both those models, although there would be differences we'd have to deal with.
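Multiplying out that lightweight-processor characterization — and assuming, for simplicity, one flop per core per cycle, which is an added assumption rather than part of the characterization:

```latex
\underbrace{10^{9}\ \tfrac{\text{cycles}}{\text{s}}}_{1\ \text{GHz}}
\times
\underbrace{10^{3}\ \tfrac{\text{cores}}{\text{node}}}_{\text{a thousand cores}}
\times
\underbrace{10^{6}\ \text{nodes}}_{\text{a million nodes}}
\times 1\ \tfrac{\text{flop}}{\text{core}\cdot\text{cycle}}
= 10^{18}\ \text{flop/s}
```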

Both of the models generate 10^18 FLOPS and both have on the order of a billion threads of execution. We realize that represents a lot of parallel processing and we need to support that in some manner. That’s today’s view of the hardware, although clearly, that could change.

HPCwire: So how would you engage vendors to build these exascale machines? What's the business case?

Dongarra: Well, the business case may mean that the government, or governments, would have to provide incentives to the manufacturers, that is, to put up money so that they develop architectures in this direction. We can’t expect the vendors to drop the commodity side of their business to address this very small niche activity unless there’s an incentive to do so. I think the government is prepared to provide those incentives, and to work with the applications people to change that current model that we have, where things are just thrown over the fence.

The other thing that we realize is that we don't have a very good mechanism for coordinating research at a global level. There's some level of coordination between the DOE and NSF, but there's really no coordination across country boundaries. We're looking at the EC and the activities they have, the Japanese, perhaps the Chinese and Koreans, and so on, and trying to understand how to attack the software issues by dividing up the work.

That requires a higher level of coordination at the government funding level to be able to target research in certain areas so we don’t duplicate efforts too much. And then we can also work together on things we have a mutual interest in.

The G8 countries recently put out a call for exascale software for applications. Seven of the G8 countries — the US, Canada, the UK, France, Germany, Japan, and Russia — have gotten together and put money on the table — 10 million euros — to fund research and evaluate collaborative proposals on exascale software. They're going to evaluate the proposals that were submitted and ask a certain number of them to refine their ideas and submit full proposals. Part of the ground rules is that each proposal had to involve a minimum of three countries. This G8 initiative used the IESP as a model for describing what they wanted.

HPCwire: In a broad sense, what is the goal of the IESP?

Dongarra: The goal of the IESP is to come up with an international plan for developing the next generation of open source software for high performance scientific computing. So our goal is to develop a roadmap, and that roadmap would lay out issues, priorities, and describe the software stack that’s necessary for exascale.

This software stack has things from the system side, like operating systems, I/O, the external environment and system management. It also deals with the development environment, which looks at programming models, frameworks for developing applications, compilers, numerical libraries and debugging tools. There’s another element that tries to integrate applications and use them as a vehicle for testing the ideas.

And finally there’s an avenue that I’ll call cross-cutting issues — issues that really impact all of the software that we’re talking about. That has to do with resilience, power management, performance optimization, and overall programmability.

Today we don’t really have this global evaluation of missing components within the stack itself. We want to make sure that we understand what the needs are and that the research would cover those needs. So we’re trying to define and develop the priorities to help with this planning process.

Ultimately we feel the scale of investment is such that we really need international input on the requirements, so we want to work together with Americans, Europeans, and Asians and develop this larger vision for high performance computing — something that hasn't been done in the past.

All of this sits on top of a recognition that these things are driven by the applications. We're not just developing software in isolation. The applications people feel it's critical to have exascale computing to further their areas of research. The US DOE and NSF have been very strong in terms of developing those science drivers — areas like climate, nuclear energy, combustion, advanced materials, CO2 sequestration, and basic science. These all play a part in the needs for exascale. So we're working with the applications people on getting to that level.

HPCwire: The stack you’re describing, from the OS on down, sounds like a rather substantial body of software. How would it be maintained?

Dongarra: Once it gets developed, a mechanism has to be put in place for the care of the software. There's a path to exascale: going from petaflops to 10 petaflops to 100 petaflops, and finally to exascale, is going to require changes along the way. It will require redeployment in certain areas and a strategy for phasing in the software and the research necessary to develop it.

And there has to be an ultimate repository for this software, keeping it in a state where it can, in fact, be used. So yes, that becomes an important aspect of the exascale software initiative.

HPCwire: An example of this approach that comes to mind is the MPI effort, which came out of the HPC research community, and was subsequently supported by vendors. Do you see that as a model for what’s being done here, but at a much broader scale?

Dongarra: Absolutely. We have a community that develops software and vendors picking it up, perhaps refining it, and adding value to the software for their own hardware platforms. MPI is a good example, where we have a standard, which is not software, but a description of what the software should do. And then we have activities that provide a working version of that standard. MPICH is a good example of that; Open MPI is another.

Open MPI is more of a community-involved effort that has input from a larger group to develop an open source implementation. Open source is one of the major goals of the exascale software initiative, although we don’t specify the exact licensing structure within that context. That’s something we’ll have to face at some point.
