Beyond the immediate investigations surrounding new variants and pharmaceuticals, the Covid-19 pandemic is raising the profile of another crucial, anxiety-inducing question: where will the next pandemic come from, and how can we be better prepared? As Covid took hold, Artem Babaian—a researcher at the University of Cambridge—scoured enormous genomic datasets for viral fingerprints, leveraging vast cloud resources and identifying over 130,000 previously unknown RNA viruses, including nine coronaviruses. In an interview with HPCwire, Babaian explained how the project came to fruition.
The Sequence Read Archive
“There’s this thing called the Sequence Read Archive [SRA],” he said, “and you can think of this as the Library of Alexandria for genetics data. Everyone in the world, when you’re doing biology research and you generate sequencing data, you deposit your data into the Sequence Read Archive along with your study—and this is publicly available.”
“So there’s this giant public domain database,” he continued. “The problem was that it was locked up in the National Institutes of Health [NIH]—they have [the National Center for Biotechnology Information] NCBI, which is like their informatics database. And so NCBI had this data and when I was accessing it, it was really slow because everything had to go through networking, right? So you can imagine—there’s literally petabytes of data and you can’t really get to it because it’s going over the internet.”
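For a sense of the traditional access path Babaian describes, pulling a single run from the SRA typically goes through NCBI’s sra-tools over the network. A minimal sketch (the run accession here is only illustrative):

```python
import subprocess

# Illustrative accession; any public SRA run ID works here.
run = "SRR000001"

# Fetch the archived run from NCBI over the network (the slow path
# described above), then unpack it to FASTQ for analysis.
subprocess.run(["prefetch", run], check=True)
subprocess.run(["fasterq-dump", run], check=True)
```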
Then the NIH announced a modernization program for the NCBI, including the NIH STRIDES Initiative, which sought to make datasets like the Sequence Read Archive easier to use.
“What they did was they cloned the data onto cloud platforms, including Amazon Web Services,” continued Babaian, who, at the time, was working on a cancer genetics project. “Now this, just by pure luck, was completed in February of 2020. So I was kind of excited because I was like, ‘oh, I might be able to do cancer genetics research faster’—and then comes March.”
Roping in an HPC engineering friend, Babaian began to pursue an idea. “We could actually probably analyze the data faster than anyone else—if we can use it natively on AWS.”
The task, as they had conceptualized it, was monumental: a hunt for one specific gene associated with RNA viruses (like SARS-CoV-2) across 20 petabytes of raw sequencing data. “We looked for this gene called RNA-dependent RNA polymerase,” Babaian said, “and you can think of it as like the heart of an RNA virus, where all the RNA viruses have to have this one gene, and it’s different enough between RNA viruses that we can distinguish different viruses based on this gene.”
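Conceptually, the search reduces to asking, for each of an enormous number of reads, whether it resembles any known RdRp sequence. A toy Python sketch of that idea follows; note that the real pipeline relied on production alignment tools and more sensitive comparison to catch divergent viruses, so this nucleotide k-mer filter is only illustrative.

```python
def kmers(seq, k=21):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_rdrp_index(known_rdrp_seqs, k=21):
    """Union of k-mers across all known RdRp gene sequences."""
    index = set()
    for ref in known_rdrp_seqs:
        index |= kmers(ref, k)
    return index

def looks_like_rdrp(read, index, k=21, min_shared=2):
    """Flag a read that shares k-mers with any known RdRp sequence,
    checking both strands."""
    revcomp = read.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    return any(len(kmers(seq, k) & index) >= min_shared
               for seq in (read, revcomp))
```

The filtering itself is cheap; the hard part is applying it to 20 petabytes of reads, which is where the cloud architecture comes in.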
“We could actually probably analyze the data faster than anyone else—if we can use it natively on AWS.”
In about a month, the duo had built the first set of prototypes for infrastructure to tackle this task using Amazon Web Services (AWS), eventually calling that infrastructure Serratus. “[Serratus] is like a set of configuration files for how to run different AWS components together,” Babaian explained.
“The main idea was that we would really have to be maximizing for I/O, which was the bottleneck in this analysis,” he said. “Most people consider CPU as the bottleneck in genetics analysis, but for particular research like this, where we’re saying, ‘okay, we want a very specific sequence and we just need to look at a lot of data,’ you’re really limited by I/O.”
The team—which grew steadily as more and more researchers volunteered their time—built Serratus to “chunk out” the data, distributing the massive dataset over more nodes to maximize aggregate I/O throughput. As the project moved forward, it incorporated more and more elements: auto-scaling groups in AWS for different instance types; real-time tracking of nodes using Prometheus; and a Grafana-enabled dashboard.
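Serratus’s exact plumbing lives in its configuration files, but the chunking idea can be sketched with a simple work queue: each SRA run accession becomes one independent unit of work that any node can claim, so adding nodes adds aggregate bandwidth. A hypothetical boto3 version (the queue name and accessions are placeholders, not the project’s actual setup):

```python
import json

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="serratus-work")["QueueUrl"]

# Placeholder accessions; the real run list came from SRA metadata.
for accession in ["SRR0000001", "ERR0000002", "DRR0000003"]:
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody=json.dumps({"run": accession}))
```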
The researchers approached AWS, eventually working through the University of British Columbia’s Cloud Innovation Centre (CIC), a partnership between AWS and the university that connected researchers to cloud resources. They gained access to sufficient resources through the CIC, with AWS providing the necessary credits for the work.
“Most people consider CPU as the bottleneck in genetics analysis, but for particular research like this … you’re really limited by I/O.”
The team used AWS’ r5.xlarge instances for the downloading cluster—up to 2,000 at once when at capacity, Babaian said—due to their memory capacity. “For the aligner—this is kind of the workhorse of the cluster, this is where the actual CPU computations are done that are challenging—we would use [c5n.xlarge]” — up to 3,500 of those at a time. The merge stage would use a couple hundred more of the r5.xlarge instances, the scheduler would run on r5.8xlarge instances, and the monitor on r5.4xlarge. “We weren’t actually using very big instances,” Babaian said. “We opted to go with small instances because you have a certain threshold of I/O that all the small instances have.”
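In boto3 terms, standing up those fleets might look something like the sketch below; the launch template IDs and subnet are placeholders, and the size ceilings mirror the counts Babaian cites.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# (fleet name, placeholder launch template pinning the instance type, max nodes)
FLEETS = [
    ("serratus-dl",    "lt-0aaaaaaaaaaaaaaaa", 2000),  # r5.xlarge downloaders
    ("serratus-align", "lt-0bbbbbbbbbbbbbbbb", 3500),  # c5n.xlarge aligners
    ("serratus-merge", "lt-0cccccccccccccccc", 200),   # r5.xlarge mergers
]

for name, template_id, max_size in FLEETS:
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName=name,
        LaunchTemplate={"LaunchTemplateId": template_id},
        MinSize=0,
        MaxSize=max_size,
        DesiredCapacity=0,  # scaled up only while work is queued
        VPCZoneIdentifier="subnet-0ddddddddddddddddd",  # placeholder
    )
```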
And through all of this, disk space was limited—often just 10 GB or so on each node—with the data predominantly bypassing the disks and streaming directly from Amazon S3 into the CPUs. “Because writing to disk and then reading from disk would slow down the process, right?” Babaian said.
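That streaming pattern is easy to sketch with boto3: read the object in chunks and pipe it straight into the processing step’s stdin, never staging it on disk. The bucket, key, and aligner command below are all illustrative stand-ins.

```python
import subprocess

import boto3

s3 = boto3.client("s3")

# Illustrative bucket and key; the SRA's cloud mirror has its own layout.
obj = s3.get_object(Bucket="example-sra-mirror", Key="runs/SRR000001.sra")

# Hypothetical aligner reading from stdin; the node's small local disk
# is never touched.
proc = subprocess.Popen(["./aligner", "-"], stdin=subprocess.PIPE)
for chunk in obj["Body"].iter_chunks(chunk_size=1 << 20):  # 1 MiB chunks
    proc.stdin.write(chunk)
proc.stdin.close()
proc.wait()
```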
“I know what it’s like to use university supercomputers, and there are other published groups that have done similar work, but they weren’t able to scale up,” he said. “What they report is about 3,000 datasets per day. You could be generous and say, okay, maybe five to ten thousand datasets per day would be the maximum.”
“We were processing over one million datasets per day,” he continued.
“We were processing over one million datasets per day.”
130,000+ newly identified viruses
“We started with the input query of 15,000 known RNA viruses,” Babaian said. “We processed 5.7 million datasets in 11 days for $24,000 and we found just over 130,000 new RNA viruses, including nine new coronaviruses.” That works out to roughly 0.4 cents per dataset.
Babaian stressed that the coronaviruses the researchers found were not human-infecting. “These are very evolutionarily distant viruses—they infect aquatic animals, so: seahorses, axolotl, fugu fish, catfish, eel…” Despite that, he said, the coronaviruses the team discovered were surprising and informative. “There’s actually a very key difference in the coronaviruses that we discovered compared to all previously reported coronaviruses, where the virus was not on a single molecule—it actually is occurring on two separate molecules.” This, he explained, is more similar to influenza and challenges the textbook definition of a coronavirus.
“There’s actually a very key difference in the coronaviruses that we discovered compared to all previously reported coronaviruses[.]”
But Babaian has his sights set on a much broader scope than coronaviruses. All around the world, he said, biologists were sequencing samples from everything from ice cores to rare species in the Amazon. “All this data’s centralized,” he said. “So what we’ve done, in essence, is turned all the world’s sequencing data into a giant viral surveillance network where—as data is going online from all corners of the earth—we can monitor the entirety of the data now to see where known and new viruses show up.”
For example, he explained the case of a SARS-like coronavirus, found in bats, that was originally identified in Hong Kong in 2007. “We picked it up in a cornfield in mainland China in 2015,” he said. “The most likely explanation is that a bat pooped on the corn, a researcher was working with the corn [and picked] up the coronavirus as part of their corn study … They would never have thought to look for coronaviruses here because it doesn’t match up, right?”
This sort of scenario, he said, happens all the time.
“Our world is essentially immersed in viruses,” he said. “A kind of crazy thing to think about is that we did this tenfold expansion of RNA viruses, and we know for a fact that we did this with limited sensitivity in the algorithm. And so we’ve just skimmed the surface of what’s available. If I were to say we could double this number by upping the sensitivity a little bit, that would be really conservative.”
Babaian said that there are an estimated one trillion virus species on Earth. The goal, he said: 100 million RNA virus species identified by the end of the decade.
Finally, Babaian took care to laud both the project’s support from AWS and his fellow researchers. “If you want to know ‘what’s the L3 cache size of a c5n.xlarge,’ that’s not in the documentation because it’s too specific,” he said. “But we were always able to get an [AWS] engineer pulled [in] and then get that information quickly.”
And of his team: “You couldn’t have dreamt of a better set of people to work with and, honestly, to volunteer their time to a project that was more likely to fail than not.”