Facing a Dark Winter, the COVID-19 HPC Consortium Doubles Down on Triage

By Oliver Peckham

November 16, 2020

In March, IBM, the U.S. Department of Energy and the White House Office of Science and Technology Policy launched an unprecedented initiative: the COVID-19 HPC Consortium. The wide-ranging consortium, aimed at leveraging worldwide supercomputing to fight the coronavirus, has since expanded to include 43 members (many international) and 600 petaflops of computing power (up from 330 in March), allocating those resources to more than 90 projects.

In the last eight months, however, much has changed beyond the size of the consortium: the physical structure of the virus (an emphasis for COVID-focused supercomputing in the spring and summer) is now much better understood; a viable vaccine now appears on track for scaled distribution by the spring of 2021; and after months of dread, massive spikes in Europe and the U.S. signal that much of the world may indeed be staring down an extraordinarily dark winter.

With these factors in mind, the COVID-19 HPC Consortium has announced that it is entering a “new phase” of its operation: one focused on benefiting patients over the next six months.

A new phase for the consortium

“In just eight months, we’ve brought together an unprecedented scale of computing power to support COVID-19 research, and dozens of projects have already utilized these resources,” said Dario Gil, director of IBM Research. “At this stage, the Consortium partners believe that our combined computing resources now hold the potential to benefit patients in the near-term, as well as offering the potential for longer-term scientific breakthroughs.”  

Specifically, the consortium will support projects working on understanding and modeling patient response to the virus; learning and validating vaccine response models from multiple clinical trials; evaluating combination therapies using repurposed molecules; and designing epidemiological models.

The six-month time frame works out to a mid-November to mid-May window, which aligns with increasing expectations of general vaccine availability in April or May. Last week, Anthony Fauci said that the Pfizer vaccine – which the company reports as having a remarkable 90-percent-plus efficacy – should be available to general populations by “the end of April.” With this window, then, the consortium appears to be aiming squarely at triage: reducing losses as much as possible over the course of the winter in anticipation of broad vaccination efforts.

For researchers, there will be no additional call for submissions; everything will proceed as it did in phase one, with one key difference.

“The research proposals have never stopped — they’ll just be evaluated in the context of phase two objectives,” Jamie Thomas, general manager of Strategy and Development for IBM Systems, told HPCwire. “We certainly want to create a seamless environment as we move from phase to phase and not slow down any research that is germane and effective.”

Six months, of course, is a long time in a pandemic, but a short time in the research world. Asked about that ambitious schedule, Thomas pointed to research projects that had used supercomputing to produce results in days, then stressed the now nearly doubled aggregate computing capacity of the consortium. “We would certainly expect that speed would be an element of what we’re able to achieve here,” she said.

Beyond phase two

A “phase two,” of course, raises the question: will there be a “phase three”? And, given the window, would a third phase focus more on vaccine distribution and a post-vaccine world?

“It’s a great question,” Thomas said. “I think we’ll learn from phase two … and then that will inform us about what we need to do next. Certainly, there’s room and ideas around having scientific resources and reserves available on a more ongoing basis.”

Thomas also referred to a letter sent by IBM CEO Arvind Krishna to President-elect Joe Biden last week. In the letter, Krishna advised Biden to “establish a Scientific Readiness Reserve – a body of scientists and computing resources from the private sector that can be swiftly mobilized in times of crisis,” citing the groundwork laid by the COVID-19 HPC Consortium.

The COVID-19 HPC Consortium spans industry members like AWS, AMD, Nvidia and Intel; academic participants like MIT, UT Austin, and CSCS; international agencies and laboratories like KISTI and RIKEN; and the Department of Energy’s national laboratories. The full list is available on the consortium’s website – but that list might continue to grow.

“As we go through phase two, if others want to participate, I’m sure that we’ll be willing to consider additional members,” Thomas said. “Some of the [participating] countries are newer to this consortium and I’m sure they’re going to bring in other proposed partners – you always see that when you’re creating an ecosystem like this.”
