The COVID-19 pandemic is producing massive amounts of data – and that data is producing a positive avalanche of academic literature. To help sift through those tens of thousands of research papers and synthesize COVID-19 knowledge, researchers at Lawrence Berkeley National Laboratory have produced a text mining tool powered by supercomputing and machine learning.
The tool, called COVIDScholar, uses natural language processing (NLP) to scan academic papers on COVID-19 and make the results easily searchable. It was developed following the White House Office of Science and Technology Policy’s mid-March call to action on AI tools for data and text mining against COVID-19. Within a week, the Berkeley Lab researchers had an early version of the tool operational.
“Our objective is to do information extraction so that people can find non-obvious information and relationships,” said Gerbrand Ceder, a Berkeley Lab scientist who is helping to lead the project, in an interview with Berkeley Lab. “That’s the whole idea of machine learning and natural language processing that will be applied on these datasets.”
Smart big data analysis tools like these are necessary to make sense of the COVID-19 literature, which quickly reached overwhelming levels. “There’s no doubt we can’t keep up with the literature, as scientists,” said Kristin Persson, another Berkeley Lab scientist leading the project. “We need help to find the relevant papers quickly and to build correlations between papers that may not, on the surface, look like they’re talking about the same thing.”
Within a month, the team had collected over 61,000 research papers in the field, with around 200 more appearing every day. COVIDScholar incorporates automated scripts that pull those papers, standardize them and index them for searching. “Within 15 minutes of the paper appearing online, it will be on our website,” said Amalie Trewartha, one of the lead developers of the tool.
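The pipeline Trewartha describes — pull newly posted papers, standardize them into a common schema, and index them for search — can be sketched in a few lines. Everything below is illustrative: the field names, the toy inverted index, and the `standardize` helper are assumptions for the sketch, not COVIDScholar's actual code.

```python
from collections import defaultdict

def standardize(raw):
    """Map a source-specific record (inconsistent field names) onto one common schema."""
    return {
        "title": (raw.get("title") or raw.get("Title") or "").strip(),
        "abstract": (raw.get("abstract") or raw.get("summary") or "").strip(),
        "source": raw.get("source", "unknown"),
    }

class InvertedIndex:
    """A toy inverted index: token -> set of document ids."""

    def __init__(self):
        self.postings = defaultdict(set)
        self.docs = []

    def add(self, paper):
        doc_id = len(self.docs)
        self.docs.append(paper)
        text = f'{paper["title"]} {paper["abstract"]}'.lower()
        for token in text.split():
            self.postings[token.strip(".,;:()")].add(doc_id)
        return doc_id

    def search(self, query):
        """Return documents containing every token in the query."""
        ids = None
        for token in query.lower().split():
            hits = self.postings.get(token, set())
            ids = hits if ids is None else ids & hits
        return [self.docs[i] for i in sorted(ids or [])]
```

In a production setting the scraper scripts would run on a schedule, feeding each standardized record into a real search backend; the structure, though, is the same three steps.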
On the surface, COVIDScholar is an advanced search engine: it returns results, sorted into subcategories, and recommends similar articles. But soon, its functionality will run much deeper. “We’re ready to make big progress in terms of the natural language processing for ‘automated science,’” said John Dagdelen, another of the lead developers. “You can use the generated representations for concepts from the machine learning models to find similarities between things that don’t actually occur together in the literature, so you can find things that should be connected but haven’t been yet.”
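The idea Dagdelen describes can be illustrated with a small sketch: a language model maps each concept to a dense vector, and cosine similarity between those vectors can link concepts that never co-occur in any single paper. The vectors below are made-up toy numbers standing in for real model output, and the concept names are chosen only for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical learned embeddings for three concepts (toy values).
embeddings = {
    "ACE2": [0.9, 0.1, 0.3],
    "spike protein": [0.85, 0.15, 0.35],
    "supply chain": [0.05, 0.9, 0.1],
}

def most_similar(concept, embeddings):
    """Rank every other concept by cosine similarity to `concept`."""
    query = embeddings[concept]
    return sorted(
        ((other, cosine(query, vec))
         for other, vec in embeddings.items() if other != concept),
        key=lambda pair: -pair[1],
    )
```

With real embeddings trained on the whole corpus, a query like this is what surfaces the "things that should be connected but haven't been yet."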
To run COVIDScholar, the researchers turned to supercomputers at the National Energy Research Scientific Computing Center (NERSC). NERSC’s current flagship supercomputer is Cori, a Cray XC40 system rated at 14 Linpack petaflops. (Edison, its previous XC30-based flagship, was retired around this time last year.)
“It couldn’t have happened somewhere else,” Trewartha said. “We’re making progress much faster than would’ve been possible elsewhere. It’s the story of Berkeley Lab really. Working with our colleagues at NERSC, in Biosciences, at UC Berkeley, we’re able to iterate on our ideas quickly.”
COVIDScholar is accessible online.