US Exascale Computing Update with Paul Messina

By Tiffany Trader

December 8, 2016

Around the world, efforts are ramping up to cross the next major computing threshold with machines that are 50-100x more powerful than today’s fastest number crunchers. Earlier this year, the United States announced its goal to stand up two capable exascale machines by 2023 as part of the Exascale Computing Project (ECP), and Argonne Distinguished Fellow Dr. Paul Messina is leading the charge.

Since the project launched last February, ECP has awarded $122 million in funding: $39.8 million for 22 application development projects, $34 million for 35 software development proposals, and $48 million for four co-design centers. At SC16, we spoke with Dr. Messina about the mission of the project, the progress made so far — including a review of these three funding rounds — and the possibility of an accelerated timeline.

Here are highlights from that discussion (the full interview is included at the end of the article).

Why exascale matters

“In the history of computing, as one gets the ability to do more calculations or deal with more data, we are able to tackle problems we couldn’t deal with otherwise. A lot of the problems that over the years we could first simulate and validate with an experiment in one dimension, we’re now able to do in two or three dimensions. With exascale, we expect to be able to do things at much greater scale and with more fidelity. In some cases we hope to be able to do predictive simulations, not just to verify that something works the way we thought it would. An example of that would be discovering new materials that are better for batteries, for energy storage.

“Exascale is an arbitrary stepping stone along a path that will continue. Just as we had gigaflops and teraflops, peta- and so on, exascale is one step along the way. But when you have an increase in compute power by a factor of one hundred, chances are you will be able to tackle things that you cannot tackle now. Even at this conference you will hear about certain problems that exascale isn’t good enough for, so that indicates that it’s a stepping stone along the way. But we have identified dozens of applications that are important: problems that can’t be solved today and that we believe, with exascale capability, we will be able to solve. Precision medicine is one; additive manufacturing for very complex materials is another; climate science and carbon capture simulation, for example, are also among the applications we are investing in.”

On the significance of ECP being a project as opposed to a program

“There have been research efforts and investigations into exascale since 2007, nine years ago. The point at which it became a project indicates that we really want to get going on it. The reason it is a project is that there are so many things that have to be done simultaneously and in concert with each other. The general outline of the project is that we invest in applications, we invest in the software stack, we invest in hardware technology with the vendor community — the people who develop the technologies so that those technologies will eventually land in products that will be in exascale systems and that will be better suited to our applications — and we also work with the facilities, drawing on their knowledge of what works when they install systems. Those four big pieces have to work together; this is a holistic approach.

“The project will have milestones, some of which are shared between the applications and the software, so if application A says ‘I need a programming language feature to express this kind of calculation more easily,’ then we want the compiler and programming models part of the software to try to address that, but then they have to address it together — if it doesn’t work, try again. That’s why it’s a project: we have to orchestrate the various pieces. It can’t just be inventing a nice programming model or tackling a very exciting application on its own. We have to work together to be successful at exascale; the same goes for the hardware architecture, the node technology and the system technology.”

The mission of ECP

“The mission is to create an exascale ecosystem so that toward the end of the project there will be companies able to bid exascale systems in response to an RFP by the facilities, not the project, but the typical DOE facilities at Livermore, Argonne, Berkeley, Oak Ridge and Los Alamos. There will be a software stack that we hope will not only meet the needs of the exascale applications but will also be a good HPC software stack, because one of our goals is to help industry and medium-sized HPC users more easily get into HPC. If the software stack is compatible at the middle end as well as the very highest end, it gives them an on-ramp. And a major goal is for the applications we are funding to be ready on day one to use the systems when they are installed. These systems have a lifetime of four to five years. If it takes two years for the applications to get ready to use them productively, half the life of the system has gone by before they can start cranking out results. So part of the ecosystem is a large cadre of application teams that know how to use exascale systems and have implemented exciting applications, which will help spread the knowledge and expertise.”

The global exascale race

“The fact that these countries and world regions like the EU have announced major investments in exascale development is an indication that exascale matters. Those countries would not be investing heavily in exascale development if they didn’t think it was useful. The US currently has a goal to develop exascale capability, with systems installed and accepted in a time range of seven to ten years. It is a range, and certainly the government is considering an acceleration of that — it might be six to seven years. Any acceleration comes at a price. This project is investing very heavily in applications and software, not just in buying the systems from vendors — so it’s a big investment, but one that I think is necessary to be able to get the benefits of exascale, to have the applications ready to use and exploit the systems.

“Could we be doing better? If this project had started two or three years ago we would be farther ahead, but that didn’t happen. We got going about a year ago, so it isn’t clear that we would be the first country to have an exaflop system. But remember, I haven’t used the word exaflop until now; I’ve talked about exascale. What we’re focusing on is having applications and a software stack that run effectively at a ratio that would indicate that it’s exascale. It might take two exaflops, so who gets an exaflop first might not be as important as who gets the equivalent of exascale. We also have goals around energy usage, 20-30 MW, which is a lot, but if we didn’t have a goal like that we might end up with 60 or 100 MW, which is very expensive.

“If we are asked as a project to accelerate, we will do our best to accelerate — it will require more money and more risk, but within reason we will certainly do that.”
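To put the power budget Messina mentions in perspective, here is a rough back-of-the-envelope sketch of the annual electricity bill at different machine power draws. The ~$0.07/kWh industrial rate is an illustrative assumption, not a figure from the interview; actual DOE facility rates vary by site and contract.

```python
# Rough annual electricity cost for an exascale system at different power draws.
# PRICE_PER_KWH is an assumed illustrative figure, not one from the interview.
HOURS_PER_YEAR = 24 * 365      # continuous operation, ignoring downtime
PRICE_PER_KWH = 0.07           # assumed industrial rate in USD per kilowatt-hour

def annual_power_cost(megawatts: float) -> float:
    """Yearly electricity cost in USD for a machine drawing `megawatts` continuously."""
    kilowatts = megawatts * 1_000
    return kilowatts * HOURS_PER_YEAR * PRICE_PER_KWH

for mw in (20, 30, 60, 100):
    print(f"{mw:>3} MW  ->  ${annual_power_cost(mw) / 1e6:.1f}M per year")
```

At that assumed rate, a 20 MW system costs roughly $12 million a year to power, while 100 MW approaches $60 million, which is why an explicit energy target is part of the project's goals.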

Sustainable exascale

“I often emphasize that the technologies we’re hoping the vendors will develop partly with our funding, and the software stack we’re developing in collaboration with universities and industry, will create a sustainable ecosystem. It will not just be that we’ve gotten to exascale, systems can be anointed as exascale, and we breathe a sigh of relief and relax. It needs to be sustainable, and that’s why we really want systems that are in the vendors’ product lines — not something they are building just for us, one of a kind. It needs to be part of the business model that they want to follow, with software that is usable by many different applications, which will make it sustainable — open source almost exclusively, which again helps sustainability because many people can then contribute to it and help evolve it beyond exascale.”
