Gordon Bell Special Prize Goes to Massive SARS-CoV-2 Simulations

By Oliver Peckham

November 19, 2020

2020 has proven a harrowing year – but it has also produced remarkable heroes. In that spirit, the Association for Computing Machinery (ACM) this year introduced the Gordon Bell Special Prize for High Performance Computing-Based COVID-19 Research. The prize, which was awarded in a ceremony today at the (virtual) SC20 supercomputing conference, recognizes “outstanding research achievement towards the understanding of the COVID-19 pandemic through the use of high-performance computing.”

Nominees for the prestigious award were selected “based on performance and innovation in their computational methods, in addition to their contributions towards understanding the nature, spread and/or treatment of the disease.” The award is accompanied by a $10,000 prize. The Special Prize for High Performance Computing-Based COVID-19 Research is slated to be awarded in 2021 as well.

The four finalist teams presented virtually at SC20 in advance of the awards ceremony, showcasing the myriad ways in which massive supercomputing has been used to deliver crucial knowledge about the pandemic and the virus at its core, from atom-by-atom simulations of the viral envelope to person-by-person simulations of major cities.

And the winner is…

Bronis R. de Supinski, chair of the Gordon Bell Prize Committee and CTO for Livermore Computing at Lawrence Livermore National Laboratory (LLNL), took the virtual stage to announce the winning team: a wide-reaching, nationwide collaboration to develop unprecedented simulations of key aspects of the novel coronavirus.

Image courtesy of SC20

AI-Driven Multiscale Simulations Illuminate Mechanisms of SARS-CoV-2 Spike Dynamics

Team: Lorenzo Casalino, Abigail Dommer, Zied Gaieb, Emilia P. Barros, Terra Sztain, Surl-Hee Ahn, Anda Trifan, Alexander Brace, Anthony Bogetti, Heng Ma, Hyungro Lee, Matteo Turilli, Syma Khalid, Lillian Chong, Carlos Simmerling, David Hardy, Julio Maia, James Phillips, Thorsten Kurth, Abraham Stern, Lei Huang, John McCalpin, Mahidhar Tatineni, Tom Gibbs, John Stone, Shantenu Jha, Arvind Ramanathan and Rommie E. Amaro.

The winning team zeroed in on a part of the SARS-CoV-2 virus that has become notorious to anyone following COVID-19 research: the spike protein, which both provides the coronavirus with its namesake crown-like spikes and allows it to infect human cells. The team used Summit (still the second-most powerful publicly ranked supercomputer) to simulate the SARS-CoV-2 spike protein and viral envelope in a model comprising 305 million atoms.

The resulting model of SARS-CoV-2. Image courtesy of Rommie Amaro and Lorenzo Casalino.

“Experiments give us a picture of what these things look like, but they can’t tell us the whole story,” said Rommie Amaro, co-lead of the project and professor and endowed chair of chemistry and biochemistry at the University of California San Diego. “The only way we can do this is through simulations, and right now we are pushing the capabilities of molecular simulations to the limits of the computer architectures that we have on this earth. This is at the edge of possibilities of what people are capable of doing.”


“We are giving people never-before-seen, intimate views of this virus, with resolution that is impossible to achieve experimentally.”


“We are giving people never-before-seen, intimate views of this virus, with resolution that is impossible to achieve experimentally,” she added. “Why we care about this is because if we want to understand how the virus infects the host cell, if we want to be able to design antibodies and new drugs to block and cure infection, if we want to be able to design new therapeutics, this information at this very fine resolution at the atomic level is required.”

To achieve the massive simulation, the team optimized and scaled the Nanoscale Molecular Dynamics (NAMD) code across Summit, a feat made possible through extensive work on other supercomputers, including Frontera, Comet and ThetaGPU. The results illuminated the virus’ sugary glycan shield – which protects it from many pharmaceutical attack strategies – and highlighted the critical role of the virus’ receptor binding domain.
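For a sense of what a 305-million-atom model demands of a machine, here is a quick back-of-envelope sketch in Python. Summit's node and GPU counts are its published totals; the per-atom state is a deliberately simplified assumption for illustration, not NAMD's actual memory layout.

```python
# Back-of-envelope scale of the 305-million-atom simulation.
ATOMS = 305_000_000
SUMMIT_GPUS = 4_608 * 6  # Summit: 4,608 nodes x 6 Nvidia V100 GPUs

# Minimal per-atom dynamical state: positions, velocities and forces,
# each a 3-vector of float64 (a simplifying assumption, not NAMD's layout).
bytes_per_atom = 3 * 3 * 8
print(f"Core per-atom state alone: ~{ATOMS * bytes_per_atom / 1e9:.0f} GB")  # ~22 GB

# Even spread over every GPU in the machine, each GPU still owns ~11,000
# atoms whose short-range forces must be re-evaluated every timestep.
print(f"Atoms per GPU at full scale: ~{ATOMS / SUMMIT_GPUS:,.0f}")
```

Even under these generous simplifications, the bare dynamical state runs to tens of gigabytes, before neighbor lists, long-range electrostatics and analysis buffers are counted – hence the need for leadership-class hardware.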


An incredible slate of finalists

Though the SARS-CoV-2 simulations took home the prize, the entire field of finalists illustrated the astonishing work that the HPC community has put into fighting the pandemic. Keep reading to learn more about the other three finalist teams.

High-Throughput Virtual Laboratory for Drug Discovery Using Massive Datasets

Team: Jens Glaser, Josh V. Vermaas, David M. Rogers, Jeff Larkin, Scott LeGrand, Swen Boehm, Matthew B. Baker, Aaron Scheinberg, Andreas F. Tillack, Mathialakan Thavappiragasam, Ada Sedova and Oscar Hernandez.

A team based at Oak Ridge National Laboratory (ORNL) also used Summit – this time to screen more than a billion compounds for their ability to bind with two different structures of SARS-CoV-2’s main protease, completing each of those screenings in under 24 hours.

To achieve those remarkable results, the team scaled AutoDock-GPU to 27,612 of Summit’s Nvidia V100 GPUs, ending up with a 350-fold speedup compared to the CPU version of the same code. The researchers faced an uphill battle on this front, as very few molecular docking codes have used GPUs, and fewer still are well-supported – let alone open-source. The researchers worked with Nvidia to create a CUDA version of the code for high-throughput analysis.


“When we were using Summit, we were docking 20,000 compounds a second.”


“When we were using Summit, we were docking 20,000 compounds a second,” said Ada Sedova, a biophysicist in the Molecular Biophysics Group within ORNL’s Biosciences Division and co-lead of the project. “We have done this in 24 hours with full optimization of these poses, the way people would normally do at the small scale. To be able to do this on a billion compounds would have taken months on even the largest academic clusters without the optimizations of AutoDock-GPU for Summit.”
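Those figures hang together. A quick sanity check (a sketch using only the numbers quoted in this article) reproduces both the under-24-hour screening time and Sedova's "months" estimate for unaccelerated clusters:

```python
# Sanity check of the quoted docking throughput, using only figures
# from the article (compound count, docking rate, GPU-vs-CPU speedup).
compounds = 1_000_000_000   # roughly one billion compounds per screen
gpu_rate = 20_000           # compounds docked per second on Summit
speedup = 350               # reported AutoDock-GPU speedup over the CPU code

print(f"GPU screen: ~{compounds / gpu_rate / 3600:.1f} hours")  # ~13.9 -> under 24h
cpu_days = compounds / (gpu_rate / speedup) / 86_400
print(f"CPU-only estimate: ~{cpu_days:.0f} days")               # ~202 -> "months"
```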

An illustration of how small molecules can occupy spaces in SARS-CoV-2’s viral proteins. Image courtesy of Joshua Vermaas.

“We think that the rapid response to the COVID-19 pandemic that we stood up on Summit is essential to developing a forward-looking computational capability for future global health crises,” added Jens Glaser, a computational scientist at ORNL and another co-lead of the project. “Importantly, the speedup was realized end-to-end and contains necessary machine learning and data analytics components, and that allows us to incorporate feedback from experiments into the machine learning models and converge onto predictions and more potent inhibitors.”

A Population Data-Driven Workflow for COVID-19 Modeling and Learning

Team: Jonathan Ozik, Justin M. Wozniak, Nicholson Collier, Charles M. Macal and Mickael Binois.

A finalist team led by Argonne National Laboratory, meanwhile, used supercomputing for epidemiological analysis. Using Argonne’s Theta supercomputer (39th on the most recent Top500), the team modeled how COVID-19 spreads through populations using a city-scale representation of Chicago. The simulated Windy City was populated by 2.7 million digital individuals traveling among 1.2 million locations. The model was optimized to run across more than 800 of Theta’s nodes at once.

Mobility patterns generated by the CityCOVID model. Image courtesy of Argonne.

“In ChiSIM [the Chicago Social Interaction Model], we represent every person in the city of Chicago as an individual, including their socioeconomic and demographic variables, their activities and the places they visit – schools and workplaces, for example – in the course of those activities,” explained Nicholson Collier, a senior software engineer at Argonne. “As the agents follow their activity schedules, they become colocated with other agents in a place and interact with them, leading to trillions of interactions over the course of the simulation.”


“… trillions of interactions over the course of the simulation.”


“With this model, you have potentially many people interacting in many different ways: some might be infected, some might be susceptible, and they mix in different proportions in a variety of different locations – there are different locations like schools and workplaces where very different parts of the population interface,” said Jonathan Ozik, an Argonne computational scientist and co-lead of the project. “The multitude of possibilities the model presents make it quite qualitatively different from – and quantitatively more complex than – a statistical model or more simplified compartmental models, which are much faster to run.”
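The colocation loop at the heart of such a model is simple to sketch, even though CityCOVID itself layers calibrated disease progression and real Chicago census and mobility data on top of it. Below is a toy Python illustration with made-up parameters, not the Argonne model:

```python
# Toy sketch of the colocation mechanic Collier describes: agents follow
# a schedule of places, and infection spreads among colocated agents.
# Every parameter here is illustrative; CityCOVID/ChiSIM couples this idea
# to real census, mobility and disease-progression data at far larger scale.
import random

N_AGENTS, N_PLACES, PERIODS = 10_000, 2_000, 100
P_TRANSMIT = 0.05  # per-colocation transmission probability (made up)

# Each agent's minimal "activity schedule": a home place and a daytime place.
schedule = [(random.randrange(N_PLACES), random.randrange(N_PLACES))
            for _ in range(N_AGENTS)]
infected = {0}  # seed a single infection

for step in range(PERIODS):
    slot = step % 2  # alternate between home and daytime colocation
    occupants = {}
    for agent in range(N_AGENTS):
        occupants.setdefault(schedule[agent][slot], []).append(agent)
    newly_infected = set()
    for group in occupants.values():
        if any(a in infected for a in group):  # someone infectious is present
            newly_infected.update(a for a in group if a not in infected
                                  and random.random() < P_TRANSMIT)
    infected |= newly_infected

print(f"Infected after {PERIODS} periods: {len(infected):,} of {N_AGENTS:,}")
```

Scale this mechanic to 2.7 million agents, 1.2 million places and realistic activity schedules, and the trillions of pairwise interactions Collier cites – and the need for 800-plus supercomputer nodes – follow directly.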

Throughout the pandemic, results from CityCOVID have been used to inform stakeholders and decision-makers, particularly in Chicago and the state of Illinois.

Enabling Rapid COVID-19 Small Molecule Drug Design Through Scalable Deep Learning of Generative Models

Team: Sam Ade Jacobs, Tim Moon, Kevin McLoughlin, William D. Jones, David Hysom, Dong H. Ahn, John Gyllenhaal, Pythagoras Watson, Felice C. Lightstone, Jonathan E. Allen, Ian Karlin and Brian Van Essen.

Another finalist team hailed from LLNL, where researchers used Sierra (which recently defended its title as the third-most powerful publicly ranked supercomputer) to create an accurate, efficient generative model for producing novel compounds with the potential to treat COVID-19. By scaling training across Sierra, the team reduced the time to train the model on over 1.6 billion small-molecule compounds from a day to just 23 minutes.

“Drug design is both costly in time and effort,” said Brian Van Essen, a computer scientist and leader of the Informatics Group at LLNL. “It’s normally a 15-year process to bring a new therapeutic from discovery all the way through FDA review.” The goal, he said, was not only to greatly condense the time frame of the first two trial phases, but also to reduce the high risk of failure in phase-three trials.

The pharmaceutical pipeline. Image courtesy of the researchers.

“Our globally asynchronous multi-level parallel training approach strong scales to all of Sierra with up to 97.7 percent efficiency,” the researchers wrote, adding that they achieved 318 petaflops for 17.1 percent of half-precision peak using tensor cores. The researchers say that their model can be used to create an automated “self-learning design loop” for drug discovery, even with much less impressive computing resources than Sierra.
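To unpack those figures: strong scaling means the problem size stays fixed while hardware is added, and the efficiency number measures how close the achieved speedup comes to the ideal. A short sketch works through the quoted values (the petaflops and training-time inputs come from the article; the final scaling example is invented purely to illustrate what 97.7 percent efficiency means):

```python
# Unpacking the quoted performance figures. "Strong scaling" holds the
# problem size fixed while adding hardware; efficiency is the achieved
# speedup divided by the increase in resources.

achieved_pflops = 318
half_precision_fraction = 0.171
print(f"Implied half-precision peak: "
      f"~{achieved_pflops / half_precision_fraction / 1000:.2f} exaflops")

# The training-time improvement reported earlier in this section:
print(f"Training speedup: ~{(24 * 60) / 23:.0f}x")  # a day down to 23 minutes

def strong_scaling_efficiency(t_base, n_base, t_scaled, n_scaled):
    """Achieved speedup relative to the ideal speedup from added nodes."""
    return (t_base / t_scaled) / (n_scaled / n_base)

# An invented illustration of 97.7 percent efficiency: doubling the node
# count while runtime falls from 100 minutes to 51.2 minutes.
print(f"{strong_scaling_efficiency(100, 1000, 51.2, 2000):.1%}")  # 97.7%
```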


“This ability to quickly create high-quality machine learning models changes the time-to-insight from a compute-limited issue to a human-limited one.”


“This capability will have a dramatic impact on drug discovery,” said Ian Karlin, an LLNL computer scientist who co-authored the paper. “This ability to quickly create high-quality machine learning models changes the time-to-insight from a compute-limited issue to a human-limited one.”

Next, the researchers aim to push the scaling even further, train more types of models, increase automation and improve overall efficiency.

And also…

Don’t forget to check out our coverage of the winners and finalists for the 2020 ACM Gordon Bell Prize.
