US Exascale Computing Update with Paul Messina

By Tiffany Trader

December 8, 2016

Around the world, efforts are ramping up to cross the next major computing threshold with machines 50-100x more powerful than today’s fastest number crunchers. Earlier this year, the United States announced its goal to stand up two capable exascale machines by 2023 as part of the Exascale Computing Project, and Distinguished Argonne Fellow Dr. Paul Messina is leading the charge.

Since the project launched last February, ECP has awarded $122 million in funding: $39.8 million toward 22 application development projects, $34 million for 35 software development proposals and $48 million for four co-design centers. At SC16, we spoke with Dr. Messina about the mission of the project, the progress made so far — including a review of these three funding rounds — and the possibility of an accelerated timeline.

Here are highlights from that discussion (the full interview is included at the end of the article).

Why exascale matters

“In the history of computing, as one gets the ability to do more calculations or deal with more data, we are able to tackle problems we couldn’t deal with otherwise. A lot of the problems that over the years we could first simulate and validate with an experiment in one dimension, we’re now able to do in two or three dimensions. With exascale, we expect to be able to do things at much greater scale and with more fidelity. In some cases we hope to be able to do predictive simulations, not just to verify that something works the way we thought it would. An example of that would be discovering new materials that are better for batteries, for energy storage.

“Exascale is an arbitrary stepping stone along a path that will continue. Just as we had gigaflops, teraflops, petaflops and so on, exascale is one point along the way. But when you have an increase in compute power by a factor of one hundred, chances are you will be able to tackle things that you cannot tackle now. Even at this conference you will hear about certain problems that exascale isn’t good enough for, which indicates that it’s a stepping stone along the way. But we have identified dozens of important applications: problems that can’t be solved today and that we believe we will be able to solve with exascale capability. Precision medicine is one, additive manufacturing for very complex materials is another, and climate science and carbon capture simulation, for example, are also among the applications we are investing in.”

On the significance of ECP being a project as opposed to a program

“There have been research efforts and investigations into exascale since 2007, nine years ago. The fact that it has now become a project indicates that we really want to get going on it. The reason it is a project is that there are so many things that have to be done simultaneously and in concert with each other. The general outline of the project is that we invest in applications; we invest in the software stack; we invest in hardware technology with the vendor community — the people who develop the technologies — so that those technologies will eventually land in products that will be in exascale systems and will be better suited to our applications; and we also invest in the facilities, drawing on their knowledge of what works when they install systems. Those four big pieces have to work together; it is a holistic approach.

“The project will have milestones, some of which are shared between the applications and the software. If application A says, ‘I need a programming language feature to express this kind of calculation more easily,’ then we want the compiler and programming models part of the software effort to try to address that, but then they have to address it together; if it doesn’t work, try again. That’s why it’s a project: we have to orchestrate the various pieces. It can’t be just ‘invent a nice programming model’ or ‘tackle a very exciting application.’ We have to work together to be successful at exascale, and the same goes for the hardware architecture, the node technology and the system technology.”

The mission of ECP

“The mission is to create an exascale ecosystem, so that towards the end of the project there will be companies able to bid exascale systems in response to an RFP from the facilities (not the project, but the typical DOE facilities at Livermore, Argonne, Berkeley, Oak Ridge and Los Alamos). There will be a software stack that we hope will not only meet the needs of the exascale applications but will also be a good HPC software stack in general, because one of our goals is to help industry and medium-sized HPC users more easily get into HPC. If the software stack is compatible at the midrange as well as at the very highest end, it gives them an on-ramp. And a major goal is for the applications we are funding to be ready on day one to use the systems when they are installed. These systems have a lifetime of four to five years; if it takes two years for the applications to get ready to use them productively, half the life of the system has gone by before they can start cranking out results. So part of the ecosystem is a large cadre of application teams that know how to use exascale and have implemented exciting applications, which will help spread the knowledge and expertise.”

The global exascale race

“The fact that these countries and world regions like the EU have announced major investments in exascale development is an indication that exascale matters. Those countries would not be investing heavily in it if they didn’t think it was useful. The US currently has a goal to develop exascale capability, with systems installed and accepted, in a time range of seven to ten years. It is a range, and certainly the government is considering an acceleration of that — it might be six to seven years. Any acceleration comes at a price. This project is investing very heavily in applications and software, not just in buying systems from vendors — so it’s a big investment, but one that I think is necessary to get the benefits of exascale, to have the applications ready to use and exploit the systems.

“Could we be doing better? If this project had started two or three years ago we would be farther ahead, but that didn’t happen. We got going about a year ago — so it isn’t clear that we would be the first country to have an exaflop system. But remember, I haven’t used the word exaflop until now; I’ve talked about exascale. What we’re focusing on is having applications and a software stack that run effectively, in a ratio that would indicate that it’s exascale. It might take two exaflops, so who gets an exaflop first might not be as important as who gets the equivalent of exascale. We also have goals around energy usage, 20-30 MW, which is a lot, but if we didn’t have a goal like that we might end up with 60 or 100 MW, which is very expensive.

“If we are asked as a project to accelerate, we will do our best to accelerate — it will require more money and more risk, but within reason we will certainly do that.”
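Some back-of-the-envelope context for the power targets Messina cites (the arithmetic is ours, not his): an exaflops is 10^18 floating-point operations per second, so delivering it within a 20 MW envelope works out to 10^18 / (2 × 10^7 W) = 50 gigaflops per watt, several times the roughly 10 gigaflops per watt achieved by the most power-efficient systems on the November 2016 Green500 list. The cost implications are just as concrete: at the common rule of thumb of about $1 million per megawatt-year of electricity, a 100 MW system would run up a power bill on the order of $100 million per year, versus $20-30 million at the project’s 20-30 MW target.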

Sustainable exascale

“I often emphasize that the technologies we’re hoping the vendors will develop, partly with our funding, and the software stack we’re developing in collaboration with universities and industry will create a sustainable ecosystem. It will not just be that we’ve gotten to exascale, systems can be anointed as exascale, and we breathe a sigh of relief and relax. It needs to be sustainable, and that’s why we really want systems that are in the vendors’ product lines — not something they are building just for us, one of a kind. It needs to be part of the business model that they want to follow, with software that is usable by many different applications, which will make it sustainable — open source almost exclusively, which again helps sustainability because many people can then contribute to it and help evolve it beyond exascale.”
