Gordon Bell Special Prize Goes to Massive SARS-CoV-2 Simulations

By Oliver Peckham

November 19, 2020

2020 has proven a harrowing year – but it has produced remarkable heroes. To that end, this year, the Association for Computing Machinery (ACM) introduced the Gordon Bell Special Prize for High Performance Computing-Based COVID-19 Research. The prize, which was awarded in a ceremony today at the (virtual) SC20 supercomputing conference, recognizes “outstanding research achievement towards the understanding of the COVID-19 pandemic through the use of high-performance computing.” 

Finalists for the prestigious award were selected “based on performance and innovation in their computational methods, in addition to their contributions towards understanding the nature, spread and/or treatment of the disease.” The award is accompanied by a $10,000 prize. The Special Prize for High Performance Computing-Based COVID-19 Research is slated to be awarded in 2021 as well.

The four finalist teams presented virtually at SC20 in advance of the awards ceremony, showcasing the myriad ways in which massive supercomputing has been utilized to provide crucial knowledge around the pandemic and the virus at its core, from atom-by-atom simulations of the viral envelope to person-by-person simulations of major cities.

And the winner is…

Bronis R. de Supinski, chair of the Gordon Bell Prize Committee and CTO for Livermore Computing at Lawrence Livermore National Laboratory (LLNL), took the virtual stage to announce the winning team: a wide-reaching, nationwide collaboration to develop unprecedented simulations of key aspects of the novel coronavirus.

Image courtesy of SC20

AI-Driven Multiscale Simulations Illuminate Mechanisms of SARS-CoV-2 Spike Dynamics

Team: Lorenzo Casalino, Abigail Dommer, Zied Gaieb, Emilia P. Barros, Terra Sztain, Surl-Hee Ahn, Anda Trifan, Alexander Brace, Anthony Bogetti, Heng Ma, Hyungro Lee, Matteo Turilli, Syma Khalid, Lillian Chong, Carlos Simmerling, David Hardy, Julio Maia, James Phillips, Thorsten Kurth, Abraham Stern, Lei Huang, John McCalpin, Mahidhar Tatineni, Tom Gibbs, John Stone, Shantenu Jha, Arvind Ramanathan and Rommie E. Amaro.

The winning team zeroed in on a part of the SARS-CoV-2 virus that has become notorious to anyone following COVID-19 research: the spike protein, which both provides the coronavirus with its namesake crown-like spikes and allows it to infect human cells. The team used Summit (still the second-most powerful publicly ranked supercomputer) to simulate SARS-CoV-2’s spike protein and viral envelope in a model comprising 305 million atoms.

The resulting model of SARS-CoV-2. Image courtesy of Rommie Amaro and Lorenzo Casalino.

“Experiments give us a picture of what these things look like, but they can’t tell us the whole story,” said Rommie Amaro, co-lead of the project and professor and endowed chair of chemistry and biochemistry at the University of California San Diego. “The only way we can do this is through simulations, and right now we are pushing the capabilities of molecular simulations to the limits of the computer architectures that we have on this earth. This is at the edge of possibilities of what people are capable of doing.”


“We are giving people never-before-seen, intimate views of this virus, with resolution that is impossible to achieve experimentally.”


“We are giving people never-before-seen, intimate views of this virus, with resolution that is impossible to achieve experimentally,” she added. “Why we care about this is because if we want to understand how the virus infects the host cell, if we want to be able to design antibodies and new drugs to block and cure infection, if we want to be able to design new therapeutics, this information at this very fine resolution at the atomic level is required.”

To achieve the massive simulation, the team optimized and scaled the Nanoscale Molecular Dynamics (NAMD) code across Summit, a feat made possible through extensive work on other supercomputers, including Frontera, Comet and ThetaGPU. The results illuminated the virus’ sugary glycan shield – which protects it from many pharmaceutical attack strategies – and highlighted the critical role of the virus’ receptor binding domain.
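NAMD scales by spatially decomposing the simulation box into patches that communicate only with their neighbors, which is what lets a 305-million-atom system spread across thousands of GPUs. The toy sketch below illustrates that binning idea in plain Python; it is a schematic, not NAMD’s actual Charm++ implementation, and the atom count, box size and patch size are illustrative stand-ins.

```python
# Toy illustration of the spatial decomposition behind NAMD's scaling:
# atoms are binned into "patches" (grid cells), and each patch only
# exchanges data with its neighbors each timestep. Schematic only --
# not NAMD's actual Charm++ implementation; all sizes are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_atoms = 1_000_000   # stand-in; the real simulation used 305 million
box = 100.0           # cubic box edge length, illustrative units
patch_edge = 10.0     # patch size, typically set by the force cutoff

positions = rng.uniform(0.0, box, size=(n_atoms, 3))

# Assign each atom to a patch by integer-dividing its coordinates.
grid = int(box / patch_edge)
patch_ids = (positions // patch_edge).astype(int)
flat_ids = np.ravel_multi_index(patch_ids.T, (grid, grid, grid))

atoms_per_patch = np.bincount(flat_ids, minlength=grid**3)
print(f"{grid**3} patches, ~{atoms_per_patch.mean():.0f} atoms each")
# Each patch computes short-range forces against only its 26 neighbors,
# which is what allows the work to spread across thousands of GPUs.
```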


An incredible slate of finalists

Though the SARS-CoV-2 simulations took home the prize at the end of the day, the entire field of finalists illustrated the astonishing work that the HPC community has put into ending the pandemic. Keep reading to learn more about the other three finalist teams.

High-Throughput Virtual Laboratory for Drug Discovery Using Massive Datasets

Team: Jens Glaser, Josh V. Vermaas, David M. Rogers, Jeff Larkin, Scott LeGrand, Swen Boehm, Matthew B. Baker, Aaron Scheinberg, Andreas F. Tillack, Mathialakan Thavappiragasam, Ada Sedova and Oscar Hernandez.

A team based at Oak Ridge National Laboratory (ORNL) – Summit’s home institution – also put the machine to work, this time screening more than a billion compounds for their ability to bind with two different structures of SARS-CoV-2’s main protease, and completing each of those screenings in under 24 hours.

To achieve those remarkable results, the team scaled AutoDock-GPU to 27,612 of Summit’s Nvidia V100 GPUs, netting a 350-fold speedup over the CPU version of the same code. The researchers faced an uphill battle on this front, as very few molecular docking codes make use of GPUs, and fewer still are well-supported – let alone open source. The team worked with Nvidia to create a CUDA version of the code for high-throughput analysis.


“When we were using Summit, we were docking 20,000 compounds a second.”


“When we were using Summit, we were docking 20,000 compounds a second,” said Ada Sedova, a biophysicist in the Molecular Biophysics Group within ORNL’s Biosciences Division and co-lead of the project. “We have done this in 24 hours with full optimization of these poses, the way people would normally do at the small scale. To be able to do this on a billion compounds would have taken months on even the largest academic clusters without the optimizations of AutoDock-GPU for Summit.”
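Those figures hang together under simple arithmetic, as the back-of-the-envelope sketch below shows (the inputs are the numbers quoted in this article; the per-GPU rates are derived estimates, not reported benchmarks):

```python
# Back-of-the-envelope check of the docking throughput quoted above.
# Inputs are figures from the article; outputs are derived estimates,
# not reported benchmarks.
compounds = 1_000_000_000      # ~1 billion compounds per screening
rate = 20_000                  # compounds docked per second on Summit
gpus = 27_612                  # Nvidia V100 GPUs used

seconds = compounds / rate
print(f"Full screen: {seconds / 3600:.1f} hours")         # ~13.9 h, under 24 h
print(f"Per-GPU rate: {rate / gpus:.2f} compounds/s")     # ~0.72 compounds/s/GPU
print(f"Per-GPU time per compound: {gpus / rate:.2f} s")  # ~1.4 s each
```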

An illustration of how small molecules can occupy spaces in SARS-CoV-2’s viral proteins. Image courtesy of Joshua Vermaas.

“We think that the rapid response to the COVID-19 pandemic that we stood up on Summit is essential to developing a forward-looking computational capability for future global health crises,” added Jens Glaser, a computational scientist at ORNL and another co-lead of the project. “Importantly, the speedup was realized end-to-end and contains necessary machine learning and data analytics components, and that allows us to incorporate feedback from experiments into the machine learning models and converge onto predictions and more potent inhibitors.”

A Population Data-Driven Workflow for COVID-19 Modeling and Learning

Team: Jonathan Ozik, Justin M. Wozniak, Nicholson Collier, Charles M. Macal and Mickaël Binois.

A finalist team led by Argonne National Laboratory, meanwhile, used supercomputing for epidemiological analysis. Using Argonne’s Theta supercomputer (39th on the most recent Top500), the team modeled how COVID-19 spreads through a population via a city-scale representation of Chicago. The simulated Windy City was populated by 2.7 million digital individuals traveling among 1.2 million locations, and the model was optimized to run simultaneously on more than 800 of Theta’s nodes.

Mobility patterns generated by the CityCOVID model. Image courtesy of Argonne.

“In ChiSIM [the Chicago Social Interaction Model], we represent every person in the city of Chicago as an individual, including their socioeconomic and demographic variables, their activities and the places they visit – schools and workplaces, for example – in the course of those activities,” explained Nicholson Collier, a senior software engineer at Argonne. “As the agents follow their activity schedules, they become colocated with other agents in a place and interact with them, leading to trillions of interactions over the course of the simulation.”
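Collier’s description maps onto a simple agent-based loop: agents follow schedules, become colocated at places, and infection passes between colocated agents. The deliberately minimal Python toy below captures that structure; it is not the actual ChiSIM/CityCOVID code, and the population size, schedules and transmission probability are invented for illustration.

```python
# Minimal toy agent-based epidemic loop in the spirit of ChiSIM:
# agents follow schedules, become colocated, and transmit infection.
# Illustrative sketch only -- not the actual CityCOVID model; the
# population size, schedules, and transmission probability are made up.
import random
from collections import defaultdict

random.seed(1)
N_AGENTS, N_PLACES, P_TRANSMIT, STEPS = 10_000, 500, 0.02, 50

# Each agent gets a crude "schedule": a few places it cycles through.
schedules = [[random.randrange(N_PLACES) for _ in range(3)]
             for _ in range(N_AGENTS)]
infected = {random.randrange(N_AGENTS)}  # seed one infection

for t in range(STEPS):
    # Colocate agents: who is where at this timestep?
    occupants = defaultdict(list)
    for agent, sched in enumerate(schedules):
        occupants[sched[t % len(sched)]].append(agent)

    # Within each place, infected agents may transmit to the others.
    new = set()
    for people in occupants.values():
        if any(a in infected for a in people):
            for a in people:
                if a not in infected and random.random() < P_TRANSMIT:
                    new.add(a)
    infected |= new

print(f"Infected after {STEPS} steps: {len(infected)} of {N_AGENTS}")
```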


“… trillions of interactions over the course of the simulation.”


“With this model, you have potentially many people interacting in many different ways: some might be infected, some might be susceptible, and they mix in different proportions in a variety of different locations – there are different locations like schools and workplaces where very different parts of the population interface,” said Jonathan Ozik, an Argonne computational scientist and co-lead of the project. “The multitude of possibilities the model presents make it quite qualitatively different from – and quantitatively more complex than – a statistical model or more simplified compartmental models, which are much faster to run.”

Throughout the pandemic, results from CityCOVID have been used to inform stakeholders and decision-makers, particularly in Chicago and the state of Illinois.

Enabling Rapid COVID-19 Small Molecule Drug Design Through Scalable Deep Learning of Generative Models

Team: Sam Ade Jacobs, Tim Moon, Kevin McLoughlin, William D. Jones, David Hysom, Dong H. Ahn, John Gyllenhaal, Pythagoras Watson, Felice C. Lightstone, Jonathan E. Allen, Ian Karlin and Brian Van Essen.

Another finalist team hailed from LLNL, where researchers used Sierra (which recently defended its title as the third most powerful publicly ranked supercomputer) to create an accurate, efficient generative model for producing novel compounds with the potential to treat COVID-19. Training the model on over 1.6 billion small-molecule compounds, the team cut the training time from a day to just 23 minutes.

“Drug design is both costly in time and effort,” said Brian Van Essen, a computer scientist and leader of the Informatics Group at LLNL. “It’s normally a 15-year process to bring a new therapeutic from discovery all the way through FDA review.” The goal, he said, was not only to greatly condense the time frame of the first two trial phases, but also to reduce the high risk of failure in phase-three trials.

The pharmaceutical pipeline. Image courtesy of the researchers.

“Our globally asynchronous multi-level parallel training approach strong scales to all of Sierra with up to 97.7 percent efficiency,” the researchers wrote, adding that they achieved 318 petaflops for 17.1 percent of half-precision peak using tensor cores. The researchers say that their model can be used to create an automated “self-learning design loop” for drug discovery, even with much less impressive computing resources than Sierra.
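The headline numbers are straightforward to cross-check; the short sketch below recomputes the implied speedup and half-precision peak from the figures quoted above (derived arithmetic only, not additional reported results):

```python
# Cross-checking the LLNL figures quoted above; inputs come from the
# article, outputs are derived arithmetic, not new reported results.
train_before_s = 24 * 3600   # assuming "a day" of training ~ 24 hours
train_after_s = 23 * 60      # 23 minutes after scaling out
print(f"Training speedup: ~{train_before_s / train_after_s:.0f}x")  # ~63x

achieved_pflops = 318        # sustained throughput on Sierra
fraction_of_peak = 0.171     # 17.1 percent of half-precision peak
peak_eflops = achieved_pflops / fraction_of_peak / 1000
print(f"Implied half-precision peak: ~{peak_eflops:.2f} exaflops")   # ~1.86 EF
```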


“This ability to quickly create high-quality machine learning models changes the time-to-insight from a compute-limited issue to a human-limited one.”


“This capability will have a dramatic impact on drug discovery,” said Ian Karlin, an LLNL computer scientist who co-authored the paper. “This ability to quickly create high-quality machine learning models changes the time-to-insight from a compute-limited issue to a human-limited one.”

Next, the researchers want to push the scaling even further, train more types of models, increase automation and improve overall efficiency.

And also…

Don’t forget to check out our coverage of the winners and finalists for the 2020 ACM Gordon Bell Prize.
