US Exascale Computing Update with Paul Messina

By Tiffany Trader

December 8, 2016

Around the world, efforts are ramping up to cross the next major computing threshold with machines that are 50-100x more performant than today’s fastest number crunchers. Earlier this year, the United States announced its goal to stand up two capable exascale machines by 2023 as part of the Exascale Computing Project (ECP), and Distinguished Argonne Fellow Dr. Paul Messina is leading the charge.

Since the project launched last February, ECP has awarded $122 million in funding: $39.8 million toward 22 application development projects, $34 million for 35 software development proposals, and $48 million for four co-design centers. At SC16, we spoke with Dr. Messina about the mission of the project, the progress made so far — including a review of these three funding rounds — and the possibility of an accelerated timeline.

Here are highlights from that discussion (the full interview is included at the end of the article).

Why exascale matters

“In the history of computing, as one gets the ability to do more calculations or deal with more data, we are able to tackle problems we couldn’t deal with otherwise. A lot of the problems that over the years we first could simulate and validate with an experiment in one dimension, we’re now able to do in two or three dimensions. With exascale, we expect to be able to do things at much greater scale and with more fidelity. In some cases we hope to be able to do predictive simulations, not just to verify that something works the way we thought it would. An example of that would be discovering new materials that are better for batteries, for energy storage.

“Exascale is an arbitrary stepping stone along a path that will continue. Just as we had gigaflops and teraflops, peta- and so on, exascale is one along the way. But when you have an increase in compute power by a factor of one hundred, chances are you will be able to tackle things that you cannot tackle now. Even at this conference you will hear about certain problems that exascale isn’t good enough for, so that indicates that it’s a stepping stone along the way. But we have identified dozens of applications that are important, problems that can’t be solved today and that we believe with exascale capability we will be able to solve. Precision medicine is one, additive manufacturing for very complex materials is another, and climate science and carbon capture simulation, for example, are among the applications we are investing in.”

On the significance of ECP being a project as opposed to a program

“There have been research efforts and investigations into exascale since 2007, nine years ago. The fact that it became a project indicates that we really want to get going on it. The reason it is a project is that there are so many things that have to be done simultaneously and in concert with each other. The general outline of the project is that we invest in applications, we invest in the software stack, we invest in hardware technology with the vendor community — the people who develop the technologies so that those technologies will eventually land in products that will be in exascale systems and that will be better suited to our applications — and we also work with the facilities, drawing on their knowledge of what works when they install systems. Those four big pieces have to work together; this is a holistic approach.

“The project will have milestones, some of which are shared between the applications and the software. So if application A says ‘I need a programming language feature to express this kind of calculation more easily,’ then we want the compiler and programming-model parts of the software stack to try to address that — but then they have to address it together, and if it doesn’t work, try again. That’s why it’s a project: we have to orchestrate the various pieces. It can’t be just ‘invent a nice programming model’ or ‘tackle a very exciting application’ in isolation. We have to work together to be successful at exascale; the same goes for the hardware architecture, the node technology and the system technology.”

The mission of ECP

“The mission is to create an exascale ecosystem so that towards the end of the project there will be companies that will be able to bid exascale systems in response to an RFP by the facilities — not the project, but the typical DOE facilities at Livermore, Argonne, Berkeley, Oak Ridge and Los Alamos. There will be a software stack that we hope will not only meet the needs of the exascale applications but will also be a good HPC software stack generally, because one of our goals is to help industry and medium-sized HPC users more easily get into HPC. If the software stack is compatible at the midrange as well as the very highest end, it gives them an on-ramp. And a major goal is for the applications we are funding to be ready on day one to use the systems when they are installed. These systems have a lifetime of four to five years. If it takes two years for the applications to get ready to use them productively, half the life of the system has gone by before they can start cranking out results. So part of the ecosystem is a large cadre of application teams that know how to use exascale and have implemented exciting applications, which will help spread the knowledge and expertise.”

The global exascale race

“The fact that these countries and world regions like the EU have announced major investments in exascale development is an indication that exascale matters. Those countries would not be investing heavily in exascale development if they didn’t think it was useful. The US currently has a goal to develop exascale capability, with systems installed and accepted, in a time range of seven to ten years. It is a range, and certainly the government is considering an acceleration of that — it might be six to seven years. Any acceleration comes at a price. This project is investing very heavily in applications and software, not just in buying the system from vendors — so it’s a big investment, but one that I think is necessary to be able to get the benefits of exascale, to have the applications ready to use and exploit the systems.

“Could we be doing better? If this project had started two or three years ago we would be farther ahead, but that didn’t happen. We got going about a year ago, so it isn’t clear that we would be the first country that has an exaflop system. But remember, I haven’t used the word exaflop until now. I’ve talked about exascale. What we’re focusing on is having applications and a software stack that run effectively, in a ratio that would indicate that it’s exascale. It might take two exaflops, so who gets an exaflop first might not be as important as who gets the equivalent of exascale. We also have goals around energy usage, 20-30 MW, which is a lot, but if we didn’t have a goal like that we might end up with 60 or 100 MW, which is very expensive.

“If we are asked as a project to accelerate, we will do our best to accelerate — it will require more money and more risk, but within reason we will certainly do that.”
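
For readers who want to check the arithmetic behind those targets, here is a minimal back-of-the-envelope sketch. It is not from the interview: the 50 percent sustained-to-peak efficiency is an assumed figure, chosen only to match Messina’s “it might take two exaflops” remark, and the function names are ours.

```python
# Back-of-the-envelope sketch of the exascale targets quoted above.
# Assumption (ours, not the article's): a 50% sustained-to-peak
# efficiency, matching Messina's "it might take two exaflops" remark.

EXAFLOP = 1e18  # floating-point operations per second

def peak_needed(sustained_flops: float, efficiency: float) -> float:
    """Peak rate a machine needs in order to sustain a target application rate."""
    return sustained_flops / efficiency

def gigaflops_per_watt(peak_flops: float, power_watts: float) -> float:
    """Energy efficiency implied by a peak rate and a power budget."""
    return (peak_flops / 1e9) / power_watts

# Sustaining one exaflop at 50% efficiency requires a two-exaflop peak.
peak = peak_needed(EXAFLOP, efficiency=0.5)
print(f"peak required: {peak / EXAFLOP:.1f} exaflops")

# What the power figures imply at a one-exaflop peak:
for megawatts in (20, 30, 60, 100):
    eff = gigaflops_per_watt(EXAFLOP, megawatts * 1e6)
    print(f"{megawatts:>3} MW -> {eff:5.1f} gigaflops/watt")
```

At a one-exaflop peak, the 20-30 MW goal works out to roughly 33-50 gigaflops per watt; letting power balloon to 60-100 MW would correspond to only 10-17 gigaflops per watt, which is why the project treats the energy goal as essential rather than optional.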

Sustainable exascale

“I often emphasize that the technologies we’re hoping the vendors will develop, partly with our funding, and the software stack we’re developing in collaboration with universities and industry will create a sustainable ecosystem. It will not just be that we’ve gotten to exascale, systems can be anointed as exascale, and we breathe a sigh of relief and relax. It needs to be sustainable, and that’s why we really want systems that are in the vendors’ product lines — not something they are building just for us, one of a kind. It needs to be part of the business model that they want to follow, and software that is usable by many different applications, which will make it sustainable — open source almost exclusively, which again helps sustainability because many people can then contribute to it and help evolve it beyond exascale.”
