Last Thursday, a range of experts joined the Advanced Scale Forum (ASF) in a rapid-fire roundtable to discuss how advanced technologies have indelibly transformed humanity’s response to the COVID-19 pandemic. The roundtable, held near the one-year mark of the first lockdowns in North America, opened with a session from Ari Berman, CEO of BioTeam.
“It’s so easy to focus on the bad things: we hear about the remarkable and really unfortunate numbers of people who have died from this, the huge numbers of people who’ve been infected from it, we talk about these new, more infectious variants, et cetera,” Berman said – but, he added, there were major success stories in the pandemic, too: collaborations and technology deployments that will save “millions of lives.” (To watch the opening session, click here.)
NIH Keynote: Creating a Coordinated Data Approach to Help Address COVID-19
With that, the roundtable launched into its first keynote, delivered by the National Institutes of Health’s Susan Gregurick, who serves as associate director for data science and director of the NIH’s Office of Data Science Strategy.
“We’ve been working for almost a year now to sprint ahead to collect and enhance SARS-CoV-2 data – clinical data, structural data, genomics data – to address the pandemic,” Gregurick said. “The first thing that we tried to do – and we did successfully – was to get different types of at-home, point-of-care clinical testing technologies out into the hands of our citizens.”
This program – called RADx – ranges from preparing for high-throughput COVID-19 testing to engaging underserved populations through community-engaged implementation projects, and it’s one of several data-driven projects run inside the NIH. The NIH has also, for instance, been working on its Collaboration to Assess Risk and Identify Long-Term Outcomes (CARING) for Children with COVID Program.
Still, the NIH needed a longer reach. They worked with the National COVID Cohort Collaborative (N3C), which integrates electronic healthcare record data on COVID-19, augmenting it with “an incredibly rich set of data from vulnerable populations.” As of a few months ago, the N3C had millions of participants contributing data to hundreds of ongoing projects and collaborations. (The data is available in a cloud archive, accessible here.)
The NIH also worked with the All of Us Research Program – which collects longitudinal COVID-19 health outcome data alongside phenotypic and serological data – and with the BioData Catalyst, which provides data from clinical trials and observational studies, such as those that evaluated hydroxychloroquine early in the pandemic.
Soon enough, the NIH found itself serving as an aggregator of a wide range of data from various sources – and grappling with the logistical implications of coordinating both the data and access to it across many interested parties.
“Making all this work together across many different projects really does require some efforts in data harmonization,” Gregurick said. “We’ve been tackling this in two different ways: … common data elements and mapping to data models. In some cases it’s a development of curation strategies within the data hub, … in other cases it’s at the point of collection and really collaborating with our data coordination centers.”
The different programs within the RADx initiative, for instance, shared around 16 common data elements (CDEs) that could be more easily integrated, but each program also contained its own unique elements. “We’re using those common data elements to help construct data models and data search strategies for ontology,” Gregurick said. “We’re also mapping these to a common data model.”
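In practice, that harmonization step amounts to mapping each program’s local field names onto the shared CDEs while keeping program-specific fields intact. Here is a minimal Python sketch of the idea; the field names, mappings and records are invented for illustration and are not the NIH’s actual data model.

```python
# Hypothetical sketch of CDE-based harmonization; field names, mappings
# and records are invented and do not reflect the NIH's actual data model.

# Per-program mappings from local field names to shared CDE names.
CDE_MAPPINGS = {
    "program_a": {"pt_age": "age", "test_result": "sars_cov_2_result"},
    "program_b": {"age_years": "age", "covid_pcr": "sars_cov_2_result"},
}

def to_common_model(record: dict, program: str) -> dict:
    """Map a program-specific record onto the common data elements.

    Fields without a CDE mapping are kept under a program-qualified key,
    preserving each program's unique elements alongside the shared ones.
    """
    mapping = CDE_MAPPINGS[program]
    harmonized = {}
    for field, value in record.items():
        harmonized[mapping.get(field, f"{program}.{field}")] = value
    return harmonized

# Records from different programs now share queryable CDE fields.
print(to_common_model({"pt_age": 42, "test_result": "positive"}, "program_a"))
print(to_common_model({"age_years": 7, "covid_pcr": "negative"}, "program_b"))
```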
The NIH has also been working to unify other supporting technology, such as the researcher authentication services that allow access to various data, tools and hubs across platforms. More ambitiously, they’re piloting a program to link records from a given individual across platforms without compromising that individual’s identity.
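One widely used technique for that kind of privacy-preserving linkage is to derive a keyed-hash token from stable identifiers, so repositories can match the same person without ever exchanging the identifiers themselves. The sketch below illustrates that general approach only; it is an assumption for illustration, not the design of the NIH pilot.

```python
# Minimal sketch of keyed-hash linkage tokens; this illustrates the general
# technique, not the NIH pilot's actual design.
import hashlib
import hmac

# In practice the key would be held by a trusted intermediary, not by the
# data repositories themselves. The value here is purely hypothetical.
LINKAGE_KEY = b"secret-held-by-honest-broker"

def linkage_token(name: str, date_of_birth: str) -> str:
    """Derive a deterministic, non-reversible token from identifiers."""
    normalized = f"{name.strip().lower()}|{date_of_birth}".encode("utf-8")
    return hmac.new(LINKAGE_KEY, normalized, hashlib.sha256).hexdigest()

# Two repositories derive the same token for the same individual...
assert linkage_token("Jane Doe", "1980-05-17") == linkage_token(" jane doe", "1980-05-17")
# ...but the token alone reveals nothing about who that individual is.
print(linkage_token("Jane Doe", "1980-05-17")[:16] + "...")
```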
“We are now looking to create data linkages across many repositories and many studies, building up ways to enhance data discovery across multiple platforms,” Gregurick said, “and ways to pull and aggregate data together into a workbench that allows for greater analysis of the data no matter where the data sits.”
“Pretty soon we’re gonna start talking about what’s colloquially called ‘long COVID,’” she added. “Many researchers ask questions about long-term morbidity of COVID, and then geographic differences in patient outcomes. Being able to ask those types of questions is really a question of pulling together data from different types of platforms.”
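Once the records share a common model, a question like that reduces to pooling and grouping, whichever platform each record came from. A toy Python illustration (the records below are invented, not real study data):

```python
# Toy example only: the records below are invented, not real study data.
from collections import defaultdict

# Harmonized records pooled from multiple repositories.
pooled = [
    {"source": "N3C", "region": "Northeast", "long_covid": True},
    {"source": "N3C", "region": "South", "long_covid": False},
    {"source": "All of Us", "region": "Northeast", "long_covid": False},
    {"source": "All of Us", "region": "South", "long_covid": True},
]

# Group by geography, regardless of which platform contributed each record.
counts = defaultdict(lambda: [0, 0])  # region -> [long-COVID cases, total]
for rec in pooled:
    counts[rec["region"]][0] += int(rec["long_covid"])
    counts[rec["region"]][1] += 1

for region, (cases, total) in sorted(counts.items()):
    print(f"{region}: {cases}/{total} with persistent symptoms")
```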
To watch Gregurick’s keynote, click here.
Google Keynote: Harnessing the Scale of Cloud to Accelerate Discovery of COVID-19 Therapeutics
Elsewhere, a trio of researchers was generating data of its own. Haribabu Arthanari and Christoph Gorgulla (both of Harvard) and William Magro (of Google), the speakers for the second keynote, faced a colossal scaling problem.
“As many of you know, SARS-CoV-2 uses an arsenal of weapons, its tiny molecular machines, the viral proteins, to attack, invade and infect our cells,” Arthanari explained. “Each one of these proteins is important for the virus to replicate itself, and thus offers a therapeutic opportunity for us to target.”
However: “On average, it takes about 15 seconds for us to take a small molecule, place it in a protein, and derive the docking score. By this token, if I had to screen about one billion molecules, that would take me about 475 years. That is per target. Now, we need to do this for multiple targets in a matter of days.”
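That arithmetic holds up on the back of an envelope, and it also shows why parallelism dissolves the problem. A quick Python sanity check (our arithmetic, not from the talk):

```python
# Back-of-envelope check of the quoted numbers (our arithmetic, not the talk's).
seconds_per_molecule = 15
library_size = 1_000_000_000            # one billion candidate molecules
seconds_per_year = 60 * 60 * 24 * 365.25

serial_years = seconds_per_molecule * library_size / seconds_per_year
print(f"serial screen: ~{serial_years:,.0f} years per target")   # ~475

# At the scale the team would later use (160,000 CPUs in parallel),
# the same screen collapses to roughly a day per target.
cpus = 160_000
parallel_days = seconds_per_molecule * library_size / cpus / (60 * 60 * 24)
print(f"parallel screen: ~{parallel_days:.1f} days on {cpus:,} CPUs")  # ~1.1
```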
So, along with Gorgulla, he developed an entirely new method: VirtualFlow. Described in the resulting research paper as “a highly automated and versatile open-source platform with perfect scaling behaviour that is able to prepare and efficiently screen ultra-large libraries of compounds,” VirtualFlow made short work of the 40 target sites on the SARS-CoV-2 virus, each of which was screened against more than a billion compounds.
“So how is VirtualFlow able to do this in a matter of days?” Gorgulla asked. “The key for this is massive parallelization – and for that we need high-performance computing platforms such as Google Cloud.” Indeed, enabled by Elastifile and Slurm for managing files and workloads, the researchers ran VirtualFlow on up to 160,000 CPUs in parallel, supported by Google-provided research credits for Google Cloud access.
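Conceptually, such a screen is embarrassingly parallel: the library splits into independent chunks that can be scored without reference to one another. The sketch below shows that structure with a mock scoring function standing in for a real docking engine; it is a conceptual illustration, not VirtualFlow’s actual implementation.

```python
# Conceptual sketch only: a mock scoring function stands in for a real
# docking engine, and this is not VirtualFlow's actual implementation.
from multiprocessing import Pool
import random

def mock_docking_score(ligand_id: int) -> tuple:
    """Stand-in for placing one molecule in the target and scoring it."""
    random.seed(ligand_id)                # deterministic fake score
    return (ligand_id, random.uniform(-12.0, 0.0))

def screen_chunk(chunk: range) -> list:
    """Score every ligand in one chunk; chunks run fully independently."""
    return [mock_docking_score(ligand) for ligand in chunk]

if __name__ == "__main__":
    library = range(100_000)              # stand-in for ~1e9 compounds
    chunks = [range(i, i + 10_000) for i in range(0, len(library), 10_000)]
    with Pool() as pool:                  # one worker per local CPU core
        results = [hit for part in pool.map(screen_chunk, chunks) for hit in part]
    # Keep the best (most negative) docking scores for follow-up study.
    print(sorted(results, key=lambda hit: hit[1])[:5])
```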
Late last year, Harvard and Google Cloud were awarded HPCwire’s Readers’ Choice Award for Best Use of HPC in the Cloud for this work. To learn more, click here.
To watch this keynote, click here.
Much, much more
Beyond the keynotes, the ASF roundtable featured a fireside chat on storage architectures for research data hosted by Quantum’s Eric Bassier; a case study session detailing Intel’s contributions to the fight against COVID-19; and another case study session focused on reducing COVID-19 transmission with early detection through wastewater monitoring.
There were also two highlight sessions from solution providers: one from MemVerge discussing big memory acceleration of single-cell RNA sequencing and another from Panasas discussing optimization of storage performance and capacity for research applications.