Kathy Yelick Charts the Promise and Progress of Exascale Science

By Tiffany Trader

September 15, 2017

On Friday, Sept. 8, Kathy Yelick of Lawrence Berkeley National Laboratory and the University of California, Berkeley, delivered the keynote address on “Breakthrough Science at the Exascale” at the ACM Europe Conference in Barcelona. In conjunction with her presentation, Yelick agreed to a short Q&A discussion with HPCwire.

The timing of Yelick’s talk was apt: one year earlier, on Sept. 7, 2016, the U.S. Department of Energy made the first in a series of announcements about funding support for various components of the Exascale Computing Project, or ECP. The ECP was established to develop the exascale applications, system software, and hardware innovations necessary to enable the delivery of capable exascale systems.

Yelick is the Associate Laboratory Director for Computing Sciences at Berkeley Lab, which includes the National Energy Research Scientific Computing Center (NERSC), the Energy Sciences Network (ESnet) and the Computational Research Division, which conducts research in applied mathematics, computer science, data science, and computational science. Yelick is also a professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Her research is in parallel programming languages, compilers, algorithms and automatic performance tuning. Yelick was director of NERSC from 2008 to 2012. She was recently elected to the National Academy of Engineering (NAE) and the American Academy of Arts and Sciences, and is an ACM Fellow and a recipient of the ACM/IEEE Ken Kennedy Award and the ACM Athena Lecturer Award.

HPCwire: What scientific applications necessitate the development of exascale?

Kathy Yelick: There are more than 20 ECP applications that, broadly speaking, fall into the areas of national security, energy, the environment, manufacturing, infrastructure, healthcare and scientific discovery. Associated with each is an exascale challenge problem—something that requires around 50 times the computational power of current systems. They include a diverse set of problems such as a 100-year simulation of the integrity of fields where petroleum is extracted or energy waste is stored; a predictive simulation of an urban area that includes buildings, water quality and electricity demands; and detailed simulations of the universe to better explain and interpret the latest observational data. There are also applications analyzing data at an unprecedented scale, from the newest light sources to complex environmental genomes, and cancer research data that includes patient genetics, tumor genomes, molecular simulations and clinical data.

These applications will help us develop cleaner energy, improve the resilience of our infrastructure, develop materials for extreme environments, adapt to changes in the water cycle, understand the origin of elements in the universe, and develop smaller, more powerful accelerators for use in medicine and industry. And as a California resident, I’m interested in the work to better assess the risks posed by earthquakes.

These projects are not simply scaling or porting old codes to new machines; each represents a new predictive or analytic capability. Several are completely new to high performance computing, and others add new capabilities to existing codes, integrating new physical models that often operate at widely different space or time scales than the original code.

HPCwire: How do you respond to concerns that exascale programs are too focused on the hardware or will only benefit so-called hero codes?

Yelick: That’s an interesting concern, given that ECP is committed to spending over $200 million this year to support applications development, software and hardware R&D in partnership with vendors. There will be substantial machine acquisitions outside the project, but the project itself is directed at these other parts of the ecosystem. As I noted earlier, the application portfolio is not directed at a few hero codes, but represents a broad range of applications from both traditional and non-traditional HPC problem domains.

The NERSC facility is not slated to get one of the first exascale systems, but we expect to provide such a capability a few years later with the NERSC-10 acquisition. Similarly, NSF is planning a leadership-scale acquisition in roughly the same time frame, which should also benefit from the ECP investments. The investments made now in exascale R&D and software will benefit all exascale systems, and lessons learned on the initial applications will inform other teams. NERSC has experience helping the community make such transitions going back to the introduction of massive parallelism, and it has already started preparing the user community through its NERSC Exascale Science Applications Program, NESAP. NESAP has 20 user code teams, some of which overlap with the ECP applications, partnered with NERSC and the vendors to prepare their codes for exascale.

HPCwire: What is your perspective on the progress that is being made toward exascale, given the challenges (power, concurrency, fault-tolerance, applications)?

Yelick: We are making great progress in our applications, which were the subject of a recent internal project review. Several of the application teams have found new levels of concurrency and memory optimizations to deal with the most recent DOE HPC system, the NERSC Cori machine with its 68-core nodes and high-bandwidth memory. Much of the ECP software and programming technology can be leveraged across multiple applications, both within ECP and beyond. For example, the Adaptive Mesh Refinement Co-Design Center (AMReX), which was launched last November, is releasing its new framework to support the development of block-structured AMR algorithms at the end of September. At least five of the ECP application projects are using AMR, allowing them to efficiently simulate fine-resolution features.
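
The idea behind block-structured AMR is simple to sketch, even though AMReX itself is far more sophisticated. The toy Python fragment below (illustrative only, not the AMReX API) flags coarse cells where the solution changes rapidly and covers just those cells with finer patches, so compute and memory are spent only where fine resolution is actually needed.

```python
# Toy sketch of block-structured AMR (illustrative only; not the AMReX API):
# flag coarse cells where the solution gradient is steep, then cover the
# flagged region with small block-aligned patches to be refined.
import numpy as np

def flag_cells(u, threshold):
    """Mark coarse cells whose local gradient magnitude exceeds a threshold."""
    gx, gy = np.gradient(u)                     # gradient in index units
    return np.hypot(gx, gy) > threshold

def refine_boxes(flags, block=4):
    """Cover flagged cells with block-aligned patches that would be refined."""
    boxes = []
    nx, ny = flags.shape
    for i in range(0, nx, block):
        for j in range(0, ny, block):
            if flags[i:i+block, j:j+block].any():
                boxes.append(((i, j), (min(i+block, nx), min(j+block, ny))))
    return boxes

# A steep front at x = 0.5 triggers refinement only near the front.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), indexing="ij")
u = np.tanh((x - 0.5) * 50)
patches = refine_boxes(flag_cells(u, threshold=0.1))
print(f"{len(patches)} of {(64 // 4) ** 2} coarse blocks need refinement")
```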

Some of the R&D projects are also getting a better handle on the types of failures that will be important in practice. The hardware R&D on processor and memory designs has made great strides in reducing total system power, but power remains a challenge, and the resulting architecture innovations continue to raise software challenges for the rest of the team. Overall, we’re seeing the benefit of collaborations across the different parts of the project, the incorporation of previous research results, and the need for even tighter integration across these parts.

HPCwire: There’s an expectation that exascale supercomputers will need to support simulation, big data and machine learning workloads, which currently have distinct software stacks. What are your thoughts on this challenge? Will container technology be helpful?

Yelick: Containers can certainly help support a variety of software stacks, including today’s analytics stack, and NERSC’s Shifter technology has helped bring this capability to its HPC systems. But I think we’ll also see new software developed for machine learning that achieves much higher performance levels, moving these workloads over to lighter-weight software. Porting Spark or TensorFlow to an exascale system will bring in new user communities, but may not produce the most efficient use of these machines.

It’s somewhat ironic that training for deep learning probably has more similarity to the HPL benchmark than many of the simulations that are run today, although requirements for numerical precision are different and likely to lead to some architectural divergence. The algorithms in this space are evolving rapidly and projects like CAMERA (the Center for Advanced Mathematics for Energy Research Applications) are developing methods for analyzing data from some of the large DOE experimental facilities. Some of our policies around use of HPC need to change to better fit data workloads, both to handle on-demand computing for real-time data streams and to address the long-term needs for data provenance and sharing. The idea of receiving HPC allocations for a year at a time, and having jobs that sit in queues, will not work for these problems. NERSC is exploring all of these topics, such as with their recent 15-petaflop deep learning run described in a paper [and covered by HPCwire] by a team from NERSC, Intel and Stanford; a pilot for real-time job queues; automated metadata analysis through machine learning; and their NESAP for Data partnerships.
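
The comparison to HPL is easy to see in miniature: the forward and backward passes of a dense neural-network layer reduce to dense matrix-matrix multiplies (GEMM), the same kernel HPL stresses, just typically at lower precision. The toy fragment below (arbitrary sizes, not a real training framework) makes that explicit.

```python
# Toy illustration (not a real training framework): one forward/backward pass
# of a dense layer reduces to matrix-matrix multiplies (GEMM) -- the kernel the
# HPL benchmark stresses -- though in float32 here rather than HPL's float64.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 1024, 4096, 4096           # arbitrary illustrative sizes
x  = rng.standard_normal((batch, d_in), dtype=np.float32)
w  = rng.standard_normal((d_in, d_out), dtype=np.float32)
dy = rng.standard_normal((batch, d_out), dtype=np.float32)  # incoming gradient

y  = x @ w           # forward pass: GEMM
dw = x.T @ dy        # weight gradient: GEMM
dx = dy @ w.T        # input gradient: GEMM

flops = 3 * 2 * batch * d_in * d_out
print(f"~{flops / 1e9:.0f} GFLOP for one layer, essentially all dense GEMM")
```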

HPCwire: Speaking of machine learning and adapting codes to exascale, you’re the PI for the ECP applications project “Exascale Solutions for Microbiome Analysis,” which also involves Los Alamos National Lab and DOE’s Joint Genome Institute. Can you tell us more about that project and how you’re tailoring Meraculous for exascale systems?

Yelick: The ExaBiome project is developing scalable methods for genome analysis, especially the analysis of microorganisms, which are central players in the environment, food production and human health. They occur naturally as “microbiomes,” cooperative communities of microbes, which means that sequencing an environmental sample produces a metagenome with thousands or even millions of individual species mixed together. Many of the species cannot be cultured in a lab and may never have been seen before—JGI researchers have even discovered new life forms from such analyses. To help understand the function of various genes, Aydin Buluc and Ariful Azad in the Computational Research Division have developed a new high performance clustering algorithm called HipMCL. Such bioinformatics analysis has often been viewed as requiring shared memory machines with large memory, but we have found that using clever parallel algorithms and HPC systems with low-latency interconnects and lightweight communication, we can scale these algorithms to run across petascale systems.
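
HipMCL itself is a distributed-memory code, but the underlying Markov Clustering (MCL) iteration it scales up can be sketched on a single node. The fragment below (illustrative only, not the HipMCL implementation) alternates a sparse-matrix "expansion" step with an elementwise "inflation" step on a similarity graph; the connected components that remain are the clusters.

```python
# Single-node sketch of the Markov Clustering (MCL) iteration that HipMCL
# parallelizes at scale (illustrative only; not the HipMCL code): alternate
# "expansion" (sparse matrix squaring) and "inflation" (elementwise power plus
# column renormalization), pruning tiny entries to keep the matrix sparse.
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import connected_components

def normalize_columns(a):
    col_sums = np.asarray(a.sum(axis=0)).ravel()
    col_sums[col_sums == 0] = 1.0
    return a @ sparse.diags(1.0 / col_sums)

def mcl(similarity, inflation=2.0, iters=20, prune=1e-4):
    a = sparse.csr_matrix(similarity, dtype=np.float64)
    a = (a + sparse.eye(a.shape[0])).tocsr()    # self-loops stabilize the walk
    a = normalize_columns(a)
    for _ in range(iters):
        a = (a @ a).tocsr()                     # expansion: take a random-walk step
        a = a.power(inflation)                  # inflation: favor strong edges
        a.data[a.data < prune] = 0.0            # prune to keep the matrix sparse
        a.eliminate_zeros()
        a = normalize_columns(a)
    _, labels = connected_components(a, directed=False)
    return labels                               # cluster label per node

# Two obvious communities joined by one weak edge.
edges = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=float)
print(mcl(edges))        # e.g. [0 0 0 1 1 1]: the two communities separate
```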

The algorithms are very different from those in most physical simulations because they involve graph walks, hash tables and highly unstructured sparse matrices. The de novo metagenome assembly challenge is to reconstruct the individual genomes from the mixture of fragments produced by sequencers; our work is based on an assembler called Meraculous, developed by Dan Rokhsar’s group at JGI and UC Berkeley. As part of the ExaBiome project we’ve built a scalable implementation, extended to handle metagenomes, called MetaHipMer (Metagenome High Performance Meraculous). These tools will enable the analysis of very complex environmental samples, and analysis over time, to understand how the microbial community changes with the rest of the environment and influences that environment.
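
At its core, this style of assembly turns reads into k-mers stored in a hash table and then walks unique extensions to stitch contigs together; HipMer and MetaHipMer distribute that hash table across thousands of nodes. The single-node toy below (illustrative only, and omitting the error filtering and reverse-complement handling a real assembler needs) shows the basic pattern.

```python
# Toy single-node sketch of k-mer hashing and contig walking (illustrative
# only; not the Meraculous/MetaHipMer code): break reads into k-mers, record
# the base seen after each k-mer, then extend a seed while the extension is unique.
from collections import defaultdict

def build_kmer_table(reads, k):
    """Map each k-mer to the set of bases observed immediately after it."""
    table = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k):
            table[read[i:i+k]].add(read[i+k])
    return table

def walk_contig(table, seed):
    """Extend a seed k-mer while its forward extension is unique."""
    contig, kmer = seed, seed
    while len(table.get(kmer, ())) == 1:
        base = next(iter(table[kmer]))
        contig += base
        kmer = kmer[1:] + base
        if kmer == seed:        # stop if the walk loops back on itself
            break
    return contig

reads = ["ACGTACGGA", "GTACGGATT", "CGGATTACA"]   # overlapping toy reads
table = build_kmer_table(reads, k=4)
print(walk_contig(table, "ACGT"))                 # recovers ACGTACGGATTACA
```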

The algorithms also reflect an important workload for future exascale machines. As described in our recent EuroPar 2017 paper, they require fine-grained communication and therefore can take advantage of high injection rates, low latency and remote atomic operations (e.g., remotely incrementing a counter) in the networks. The computation is entirely dominated by these operations and local string alignment algorithms, so there’s no floating point in the entire application. It’s important that we keep all of these workloads in mind as we push towards exascale, to ensure the machines are capable of handling graph problems, bioinformatics and other highly irregular computational patterns that may be of interest outside of the science and engineering communities.
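
The communication pattern looks roughly like the sketch below, written here with MPI one-sided operations via mpi4py purely for illustration (the production HipMer/MetaHipMer codes use partitioned global address space runtimes instead): each process hashes a k-mer to an owner rank and bumps a counter there with a single remote atomic fetch-and-add, rather than exchanging bulk messages.

```python
# Toy sketch of the fine-grained pattern (illustrative only; not the actual
# HipMer/MetaHipMer implementation): hash a k-mer to an owner rank and slot,
# then update that owner's counter with one remote atomic fetch-and-add.
from mpi4py import MPI
import numpy as np
import zlib

comm = MPI.COMM_WORLD
rank, nranks = comm.Get_rank(), comm.Get_size()

slots = 1024                                    # this rank's share of a global table
counts = np.zeros(slots, dtype=np.int64)
win = MPI.Win.Create(counts, disp_unit=counts.itemsize, comm=comm)

def remote_increment(kmer):
    """Atomically increment the counter owned by whichever rank the k-mer hashes to."""
    h = zlib.crc32(kmer.encode())               # deterministic across ranks
    owner, slot = h % nranks, (h // nranks) % slots
    one = np.array([1], dtype=np.int64)
    old = np.zeros(1, dtype=np.int64)
    win.Lock(owner, MPI.LOCK_SHARED)            # shared lock: atomics stay concurrent
    win.Fetch_and_op(one, old, owner, slot, MPI.SUM)   # one-sided remote atomic add
    win.Unlock(owner)
    return int(old[0])                          # previous count at that slot

for kmer in ("ACGT", "CGTA", "GTAC"):           # toy k-mers; real runs stream billions
    remote_increment(kmer)

comm.Barrier()                                  # quiesce before tearing down the window
win.Free()
```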

HPCwire: What are some of the other key points from your talk that you’d like to share with our readers?

Yelick: First, the science breakthroughs from exascale programs will rely not just on faster machines, but also on the development of new application capabilities that build on prior research in mathematics, computer science and data science. We need to keep this research pipeline engaged over the next few years, so that we continue to have a vibrant research community to produce the critical methods and techniques that we will need to solve computational and data science challenges beyond exascale.

In that same vein, we shouldn’t think of exascale as an end goal, but rather as another point in the continuum of scientific computing. While much of DOE’s computing effort is currently devoted to exascale, we are already looking beyond to specialized digital architectures, quantum and neuromorphic computing, and new models of scientific investigation and collaboration for addressing future challenges.
