Scaling the Exa

By Nicole Hemsoth

June 3, 2010

The petascale era of supercomputing is barely underway, but the effort to reach the exascale level has already begun. In fact, it began three years ago as part of an international effort to develop a software infrastructure for exaflop supercomputers.

The International Exascale Software Project (IESP) was formed with the realization that current software used for terascale and now petascale computing is inadequate for exascale computing. The IESP brings together government agencies, vendors and other stakeholders in the HPC community, with the goal of designing and building a system software stack to support this future level of computing. That will entail managing parallelism several orders of magnitude beyond the top systems in the field today.

The University of Tennessee’s Jack Dongarra has been involved with the IESP since its conception back in 2007. At ISC’10 he chaired a session that outlined its goals and gave a status report on the project’s progress. We got a chance to speak with him before the conference to discuss exascale software, the project, and the importance of developing this software for the global HPC community.

HPCwire: We've had to go through a transition like this before. What happened to software in the transition from terascale to petascale?

Jack Dongarra: Today we have very little software that runs at the petascale level. We have software approaching terascale, in that it routinely performs at teraflop levels on our largest machines. Only through extreme efforts do we get to claim petaflop levels for our applications. It really requires a rethinking.

When we made the transition from vector machines to parallel systems, that was a big deal. We're encountering the same kind of transition today in terms of rewriting our software, at least for the things I deal with, which is writing numerical libraries. We're rewriting everything to address the issues of multicore.

Multicore presents many performance challenges that we didn't face with traditional parallel computing. I know that seems a little strange, but it's because with multicore, things happen much faster: bandwidth has increased and latency has gotten better. So you can't hesitate in what you're doing, or you'll lose too much performance.

The model that we had for parallel processing was a fork-join sort of model — what I'll call a bulk synchronous form. It was a loop: you did a bunch of things in parallel and then joined together at the end of that loop. You can't do that with multicore. You need to do more asynchronous processing.

So you need to develop algorithms that really present a form of execution that is asynchronous and breaks that model of loop-level parallelism, because waiting for the tasks to finish is just too inefficient on these systems. It requires a rethinking of our algorithms and a rewriting of our software. So it’s that kind of thing that we have to go through again as we go to exascale.
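
To make the contrast concrete, here is a minimal Python sketch, with trivial stand-in functions rather than any real numerical kernel, of fork-join execution with a barrier after each phase versus task-based execution in which each piece of work proceeds as soon as its own input is ready.

```python
from concurrent.futures import ThreadPoolExecutor

def factor_tile(tile):        # stand-in for a panel factorization step
    return tile * 2

def update_tile(tile):        # stand-in for an update that depends only
    return tile + 1           # on its own factored tile

tiles = list(range(8))

# Fork-join / bulk synchronous: every task in a phase finishes before the
# next phase starts, so the whole loop waits on its slowest task twice.
with ThreadPoolExecutor() as pool:
    factored = list(pool.map(factor_tile, tiles))          # implicit barrier
    results_sync = list(pool.map(update_tile, factored))   # implicit barrier

# Asynchronous / task-based: each tile's update is chained directly to its
# own factorization, so no tile waits for the rest of the previous phase.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(lambda t=t: update_tile(factor_tile(t))) for t in tiles]
    results_async = [f.result() for f in futures]

assert results_sync == results_async
```

The asynchronous version computes the same answer; the difference is that the global synchronization points, where fast cores sit idle waiting for the slowest task, are gone.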

HPCwire: Is this transition going to be different?

Dongarra: I think it is different, and it's different for a few reasons. One is that we learned some lessons from the previous transitions that took place, and we don't want to repeat that experience. The second reason is that there's a general recognition that this change is going to be more dramatic than it was in the previous transition. Going from thousands to hundreds of thousands of threads of execution, which is what we did before, is going to be different from going from hundreds of thousands to perhaps billions of threads. That change is going to have an enormous impact. And that, tied together with some of the architectural features being proposed today for exascale systems, is going to lead to a lot of tension right at the software level.

Because of the steepness of the ascent from petascale to exascale, we should start this process as soon as possible. Between the extreme parallelism, the hybrid design, and a memory bandwidth bottleneck that is only going to get more extreme as we move into the future, we have to start addressing these issues now.

Also, the relative amount of memory that we have on exascale systems — that balance between FLOPS and bytes — is going to be changing. In the old, old days we thought: one byte per FLOP. When you look at petascale machines, that ratio has changed quite a bit, and when you look toward exascale, it’s going to change again in an even more dramatic way. That will cause some issues with the ability of our algorithms to scale as you grow the problem size.
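
As a rough illustration of how that balance shifts, the arithmetic below uses assumed, approximate machine figures (a Jaguar-class petascale system and a hypothetical exascale configuration), not numbers quoted in the interview.

```python
# Bytes-per-flop balance: aggregate memory divided by peak floating point rate.
# All machine figures here are illustrative assumptions.
def bytes_per_flop(memory_bytes, peak_flops):
    return memory_bytes / peak_flops

print(bytes_per_flop(1e12, 1e12))      # old rule of thumb: ~1 byte per flop/s
print(bytes_per_flop(300e12, 2.3e15))  # Jaguar-class petascale: ~0.13
print(bytes_per_flop(50e15, 1e18))     # assumed exascale config: ~0.05
```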

The other issue deals with fault tolerance. When you have billions of parallel things, you're going to have failures. So it's going to become a normal part of computing that we're going to be dropping or losing part of the computation. We have to be prepared to adjust to that somehow. In the past, we didn't have to worry so much about that, and when we did, we performed a checkpoint and a restart. Well, for exascale, you can't do a checkpoint that way. There's just too much memory in the system, so it would take too long.
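
The checkpoint problem is essentially arithmetic: the time to write a full-memory checkpoint is roughly the aggregate memory divided by the file-system bandwidth. The figures in this sketch are assumptions chosen only to show the trend.

```python
# Rough estimate of full-memory checkpoint time. All figures are assumed
# values used to show the trend, not measured numbers.
def checkpoint_minutes(memory_bytes, io_bytes_per_sec):
    return memory_bytes / io_bytes_per_sec / 60.0

# Petascale-class system: ~300 TB of memory, ~100 GB/s parallel file system.
print(checkpoint_minutes(300e12, 100e9))   # ~50 minutes per checkpoint

# Hypothetical exascale system: ~50 PB of memory; even at ~1 TB/s of I/O the
# checkpoint takes many hours, comparable to or longer than the expected time
# between failures on a machine with a billion parallel components.
print(checkpoint_minutes(50e15, 1e12))     # ~833 minutes, roughly 14 hours
```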

The software infrastructure can’t deal with that today, so it’s a call to action to deal with these hardware changes. If we don’t do anything, the software ecosystem will remain stagnant. So we have to look at different approaches and perhaps be more involved in the design of the architecture, in the sense that there will be co-design with the algorithms and applications people, helping to design machines that make sense.

HPCwire: Do you think there’s general agreement about what the hardware will look like?

Dongarra: There are a number of constraints on the architecture for exascale. One constraint is cost. Everybody says a machine can cost no more than $200 million. You’re going to spend half your money on memory, so you have to take that into consideration.

There are also other constraints that come into play. For example, the machine can consume no more than 20 MW. That’s thought to be the upper limit for a reasonable machine from the standpoint of power, cooling, etc. The machine we have here at Oak Ridge — the Jaguar supercomputer — consumes about 7 megawatts.

And then there’s the question of what kind of processors we are going to have. The thinking today is that there are going to be two paths — what some people call swim lanes — to exascale hardware.

One is going to be lightweight processors. By lightweight, we mean things like the Blue Gene [PowerPC] processor. One general way to characterize this architecture is 1GHz in processor speed, one thousand cores per node, and one million nodes per system. A second path to exascale is commodity processors together with accelerators, such as GPUs. The software would support both those models, although there would be differences we’d have to deal with.

Both of the models generate 10^18 FLOPS and both have on the order of a billion threads of execution. We realize that represents a lot of parallel processing and we need to support that in some manner. That’s today’s view of the hardware, although clearly, that could change.
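
Taking the lightweight-processor characterization above at face value, a quick back-of-the-envelope check recovers both numbers, assuming one floating point operation per core per clock cycle (an assumption not stated in the interview).

```python
# The "lightweight processor" swim lane: 1 GHz cores, one thousand cores per
# node, one million nodes, assuming one flop per core per clock cycle.
clock_hz       = 1e9
cores_per_node = 1_000
nodes          = 1_000_000

threads = cores_per_node * nodes             # ~10^9 threads of execution
flops   = clock_hz * cores_per_node * nodes  # ~10^18 flop/s, i.e. an exaflop
print(f"{threads:.0e} threads, {flops:.0e} flop/s")
```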

HPCwire: So how would you engage vendors to build these exascale machines? What’s the business case?

Dongarra: Well, the business case may mean that the government, or governments, would have to provide incentives to the manufacturers, that is, to put up money so that they develop architectures in this direction. We can’t expect the vendors to drop the commodity side of their business to address this very small niche activity unless there’s an incentive to do so. I think the government is prepared to provide those incentives, and to work with the applications people to change that current model that we have, where things are just thrown over the fence.

The other thing that we realize is that we don’t have a very good mechanism for coordinating research at a global level. There’s some level of coordination between the DOE and NSF, but there’s really no coordination across country boundaries. We’re looking at the EC and the activities they have, the Japanese, perhaps the Chinese and Koreans, and so on, and trying to understand how to attack the software issues by dividing up the work.

That requires a higher level of coordination at the government funding level to be able to target research in certain areas so we don’t duplicate efforts too much. And then we can also work together on things we have a mutual interest in.

The G8 countries recently put out a call for exascale software for applications. Seven of the G8 countries — the US, Canada, the UK, France, Germany, Japan, and Russia — have gotten together and put money on the table — 10 million Euros — to fund research and evaluate collaborative proposals on exascale software. They’re going to evaluate the proposals that were submitted and ask a certain number of them to refine their ideas and submit full proposals. Part of the ground rules is that you had to have a minimum of three countries involved in the proposal. This G8 initiative used the IESP as a model for describing what they wanted.

HPCwire: In a broad sense, what is the goal of the IESP?

Dongarra: The goal of the IESP is to come up with an international plan for developing the next generation of open source software for high performance scientific computing. So our goal is to develop a roadmap, and that roadmap would lay out issues, priorities, and describe the software stack that’s necessary for exascale.

This software stack has things from the system side, like operating systems, I/O, the external environment and system management. It also deals with the development environment, which looks at programming models, frameworks for developing applications, compilers, numerical libraries and debugging tools. There’s another element that tries to integrate applications and use them as a vehicle for testing the ideas.

And finally there’s an avenue that I’ll call cross-cutting issues — issues that really impact all of the software that we’re talking about. That has to do with resilience, power management, performance optimization, and overall programmability.

Today we don’t really have this global evaluation of missing components within the stack itself. We want to make sure that we understand what the needs are and that the research would cover those needs. So we’re trying to define and develop the priorities to help with this planning process.

Ultimately we feel the scale of investments is such that we really need international input on the requirements, so we want to work together with Americans, Europeans, and Asians and really develop this larger vision for high performance computing — something that hasn’t been done in the past.

All of this sits on top of a recognition that these things are driven by the applications. We’re not just developing software in isolation. The applications people feel it’s critical to have exascale computing to further their areas of research. The US DOE and NSF have been very strong in terms of developing those science drivers — areas like climate, nuclear energy, combustion, advanced materials, CO2 sequestration, and basic science. These all play a part in the needs for exascale. So we’re working with the applications people in getting to that level.

HPCwire: The stack you’re describing, from the OS on down, sounds like a rather substantial body of software. How would it be maintained?

Dongarra: Once it gets developed, a mechanism has to be put in place for the care of the software. There’s a path to exascale: going from petaflops to 10 petaflops to 100 petaflops, and finally to exascale, is going to require changes along the way. It will require redeployment in certain areas and a strategy for phasing in the software and the research necessary to develop it.

And there has to be an ultimate repository for this software, keeping it in a state where it can, in fact, be used. So yes, that becomes an important aspect of the exascale software initiative.

HPCwire: An example of this approach that comes to mind is the MPI effort, which came out of the HPC research community, and was subsequently supported by vendors. Do you see that as a model for what’s being done here, but at a much broader scale?

Dongarra: Absolutely. We have a community that develops software and vendors picking it up, perhaps refining it, and adding value to the software for their own hardware platforms. MPI is a good example, where we have a standard, which is not software, but a description of what the software should do. And then we have activities that provide a working version of that standard. MPICH is a good example of that; Open MPI is another.

Open MPI is more of a community-involved effort that has input from a larger group to develop an open source implementation. Open source is one of the major goals of the exascale software initiative, although we don’t specify the exact licensing structure within that context. That’s something we’ll have to face at some point.
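
As a small illustration of that standard-versus-implementation split, the sketch below uses mpi4py, a Python binding to MPI that is not mentioned in the interview; the same program runs unchanged whether MPICH or Open MPI is installed underneath.

```python
from mpi4py import MPI   # mpi4py: a Python binding to the MPI standard

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# The reduction is defined by the MPI standard, so the result is the same
# whichever implementation (MPICH, Open MPI, ...) provides the library.
total = comm.allreduce(rank, op=MPI.SUM)

if rank == 0:
    library = MPI.Get_library_version().splitlines()[0]
    print(f"{size} ranks, sum of ranks = {total}, via: {library}")
```

Run with something like mpiexec -n 4 python mpi_demo.py (the script name is hypothetical): the numerical result is fixed by the standard, while the reported library string identifies which implementation happens to be linked.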
