DOE COVID Consortium Drives Faster, More Collaborative Science

By Tiffany Trader

May 7, 2020

The U.S. Department of Energy is leading the charge against the coronavirus pandemic in the United States by driving collaborative science toward a common goal. Further details about the DOE lab efforts, including the COVID-19 HPC Consortium (covered here), came to light this week at CERAWeek, a major energy-focused conference.

In a video presentation, Carlos Pascual (senior vice president, IHS Markit) interviews Dr. Thomas Zacharia, director of Oak Ridge National Laboratory, and Undersecretary of Energy for Science Paul Dabbar, who discuss the contributions of the National Lab complex, including how resources are being used in the context of molecular modeling, epidemiological research and the search for genomic clues.

Reviewing how the consortium was conceived with IBM, and citing some of the aligned companies who have joined (HPE, AWS, Microsoft, Nvidia, Google, and many others), Dabbar said it is the biggest tech industry alliance focused on a national emergency since World War II. “It’s very exciting to have such a broad group including a number of competitors in the private sector that normally compete against each other all coming together.”

Clockwise from left: Thomas Zacharia, director of Oak Ridge National Laboratory; Undersecretary of Energy for Science Paul Dabbar; Carlos Pascual, senior vice president, IHS Markit

“This truly is a consortium,” said Zacharia, “in both the private sector and the National Labs. We are 17 national laboratories that work as a system. The Department of Energy has stood up the National Virtual Biotechnology Laboratory (NVBL) that takes in all the proposals…. A sub-team is focused on utilizing the specific high performance computing resources, and this capability is made available on a priority basis to COVID-19 activities.”

In the case of Summit, the world’s top-ranked supercomputer at 149 petaflops, which Zacharia oversees, the effort offers a new level of compute capability. “There’s a group trying to ingest a billion-compound dataset,” he said, “so that they can actually look at the efficacy of these compounds for drug therapeutics development, and this is the first three-dimensional kind of calculation that they would have ever performed for something like this.”
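For a sense of what a screen at that scale looks like in code, the pattern is essentially embarrassingly parallel: split the compound library into chunks, score each chunk independently, and merge the best candidates. Below is a minimal, hypothetical Python sketch of that pattern; `score_compound`, the chunk size and the toy library are illustrative placeholders, not the actual Summit pipeline.

```python
from concurrent.futures import ProcessPoolExecutor
import heapq

def score_compound(compound):
    """Placeholder scorer; a real screen would call a docking code here
    and return a predicted binding-affinity score."""
    return float(len(compound))  # stand-in value for illustration

def score_chunk(compounds):
    # Keep only the top 10 hits per chunk to bound memory.
    return heapq.nlargest(10, ((score_compound(c), c) for c in compounds))

def screen(library, chunk_size=1000):
    chunks = [library[i:i + chunk_size] for i in range(0, len(library), chunk_size)]
    with ProcessPoolExecutor() as pool:
        partials = pool.map(score_chunk, chunks)  # chunks scored in parallel
    # Merge per-chunk leaders into a global top-10 list.
    return heapq.nlargest(10, (hit for part in partials for hit in part))

if __name__ == "__main__":
    toy_library = [f"compound-{i}" for i in range(10_000)]
    for score, name in screen(toy_library):
        print(f"{score:6.1f}  {name}")
```

At a billion compounds the same shape holds, with chunks distributed across thousands of GPU-equipped nodes rather than local processes.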

“One aspect of [the consortium] is bringing the capabilities. Equally important is the human dimension,” said Zacharia. “I’ve had a chance to talk with some of the scientists who are working on this and they say that this pandemic has actually changed how science is done. It’s become much more collaborative both internal to the laboratory, within the laboratory system, but also across agencies and industry. So it’s actually truly a testament of mobilizing the best capabilities, people, program resources and capabilities together to defeat this virus.”

Dabbar discussed the DOE’s long history of work in biology, dating back to the Manhattan Project and subsequent work on nuclear radiation damage, and extending more recently to the Human Genome Project, which evolved out of Lawrence Livermore National Lab and was ultimately executed at the University of California, Berkeley.

“The big four areas that we focus on from a broad science point of view with our various missions are materials, chemistry, physics and biology. What’s happened over the decades is that we do research on a number of different areas and we build out the capacity, and that can be switched back and forth when we build facilities.

“We were able to build off of that everyday work that we do across the complex in computing, imaging cells, working with biotech drug companies, working with researchers, and it was very natural to shift from that broad set of capabilities, moving more under biology in the near term. Moving into biology under this public health crisis came very naturally, and we were able to do it quite quickly.”

Much of the computing power is going to a well-known use case: modeling the interactions of the coronavirus’ spiky crown.

“By now almost everybody has seen the spike [protein] of the corona, the corona of the coronavirus,” said Zacharia, “and that is used as a particularly efficient target for looking at compounds to attach, how a chemical compound, in this case a drug, would [connect] with the [protein] in order to defeat this virus. And this particular study … is trying to run the analysis, screen these compounds, as to how effectively it will dock to this [protein], just one of the 10 or so proteins they are looking at to see how the target will lock with the compounds in order to see what the effect is.”
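As a rough illustration of what a docking score measures, the toy function below sums a simplified Lennard-Jones interaction energy over ligand-protein atom pairs; a lower (more negative) total suggests a better geometric fit. Real docking engines add electrostatics, flexible conformers and exhaustive pose search, so this is only a conceptual sketch with made-up coordinates, not the method used on Summit.

```python
import math

def pair_energy(r, sigma=3.5, epsilon=0.1):
    """Simplified Lennard-Jones 12-6 term for one atom pair at distance r."""
    x = (sigma / r) ** 6
    return 4 * epsilon * (x * x - x)

def docking_score(ligand_atoms, protein_atoms):
    """Sum pairwise interaction energies between ligand and protein atoms.
    In this toy model, more negative means a more favorable pose."""
    return sum(pair_energy(math.dist(l, p))
               for l in ligand_atoms for p in protein_atoms)

# Made-up coordinates (angstroms) for a tiny 'ligand' near a binding 'pocket'.
ligand = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]
pocket = [(4.0, 0.0, 0.0), (0.0, 4.0, 0.0), (4.0, 4.0, 0.0), (2.0, 2.0, 3.0)]
print(f"toy docking score: {docking_score(ligand, pocket):.3f}")
```

A screening run repeats this kind of evaluation, with far more physics, across millions or billions of candidate compounds and poses.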

Zacharia emphasized that computing is just one part of the approach. “Simultaneously, [scientists] are using the light sources and the neutron sources [that] the department has [at] various laboratories to experimentally validate how these compounds actually work in real experiments, so computing is driving experiments and experiments in turn [are] driving forward the computational capabilities to accelerate the sort of rational design of [drugs].”

The DOE is also contributing to epidemiological understanding. Dabbar discussed a project out of Argonne and Fermi national labs, working with the University of Chicago, that is modeling the virus’ spread in Chicago. The project guides policy advice on practical measures to reduce the potential spread of the virus, according to Dabbar.
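The Argonne/UChicago effort is a detailed agent-based simulation of the city, but the intuition behind such models can be shown with a much simpler compartmental sketch. The toy SIR model below (all parameters invented for illustration, not drawn from the labs’ work) demonstrates how lowering the transmission rate, the kind of practical measure the project informs, reduces the peak infection load.

```python
def sir_peak(beta, gamma=0.1, days=240, i0=1e-4):
    """Discrete-time SIR model on a normalized population.
    beta = transmission rate, gamma = recovery rate.
    Returns the peak infected fraction over the run."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return peak

# Compare an unmitigated outbreak to one with a reduced contact rate.
for label, beta in [("no intervention", 0.3), ("with distancing", 0.15)]:
    print(f"{label:16s} peak infected fraction: {sir_peak(beta):.3f}")
```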

Beyond environmental factors, the DOE is pursuing genomic indicators of individual risk. “There’s the possibility of there being a specific genomic marker or sequence that might predispose someone to catching it, or having more serious impact from it,” said Dabbar. “We’re not quite certain of that, … we certainly think there’s a possibility of it. And the only way that we can do that is to collect enough patient data in terms of who’s passed away, who’s recovered, what are their environmental backgrounds, and then also collect the genomic data for enough patients — that’s a bit harder. We need to try to get a big enough universe sample of people and try to see if there’s any trends — once again coming back to the computing aspects of the department — to see if there’s a particular pattern associated with a sequence that would aim someone towards being of higher risk. We don’t know that’s necessarily the case, but we definitely have one of the research clinics to collect patient data on that to try to see if there is [a pattern] that would point us toward a therapy also as a possibility.”
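The pattern search Dabbar describes is, in essence, an association study: does a candidate marker appear more often among severe cases than among recovered patients? The sketch below runs Fisher’s exact test on a 2x2 contingency table; the counts are fabricated purely for illustration, and a real analysis would require large cohorts, multiple-testing correction and careful control of confounders.

```python
from scipy.stats import fisher_exact

# Invented counts: rows = marker present/absent, columns = severe/recovered.
table = [[30, 70],    # marker present: 30 severe, 70 recovered
         [15, 185]]   # marker absent:  15 severe, 185 recovered

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# A small p-value would flag the marker for follow-up study; association
# alone does not establish a causal mechanism or a therapy target.
```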

In closing, Pascual asked about international engagement and the data collection process. Dabbar said discussions are underway with the UK, Japan and Canada, among others, to assist in aggregating and analyzing data.

“We’ve been in touch with and have interest from both the UK and Japan in particular,” Dabbar said, “either coordinating or even possibly joining the US Consortium to help do that aggregation and joint allocation of the research on a rapid basis. From a data point of view, the pulling together of patient data. The more patient data that we have, we’ll see better trends in what works and doesn’t work, and guide clinicians [to treat] individual patients. We’ve started conversations with different labs and different groups with both Canada and the UK, and hopefully we’ll be able to work with them. It takes a bit of an effort with patient data and what’s confidential and what can be aggregated versus individual and meet all the requirements. We actually have a full group working on this across the federal footprint of all the different agencies to help break down the barriers to collect that data, and we’re certainly open to other countries who are open to sharing data back and forth and trying to figure out the patterns to do the best treatment.”

Watch the full interview here: https://ceraweek.com/conversations/index.html?videoid=6154440879001
