A group of Oak Ridge National Laboratory researchers working on the Summit supercomputer has developed a new neural network tool for fast extraction of information from cancer pathology reports to speed research and clinical work.
“Manually extracting information is costly, time consuming, and error prone, so we are developing an AI-based tool,” said Mohammed Alawad, research scientist in the ORNL Computing and Computational Sciences Directorate and lead author of a paper published in the Journal of the American Medical Informatics Association on the results of the team’s AI tool.
An account of the work was posted this week on the ORNL site. The researchers developed a multitask convolutional neural network, or CNN—a deep learning model that learns to perform tasks, such as identifying key words in a body of text, by processing language as a two-dimensional numerical dataset.
“We use a common technique called word embedding, which represents each word as a sequence of numerical values,” Alawad said. The effort is part of a Department of Energy and National Cancer Institute collaboration, the Joint Design of Advanced Computing Solutions for Cancer (JDACS4C).
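The idea behind word embedding can be sketched in a few lines. This is a minimal illustration with a toy vocabulary and randomly initialized vectors, not the embeddings or dimensions used in the ORNL study: each word maps to a vector, so a report becomes a two-dimensional numerical array the CNN can process.

```python
import numpy as np

# Toy vocabulary and embedding table (illustrative values only,
# not the embeddings trained in the ORNL work).
vocab = {"tumor": 0, "left": 1, "breast": 2, "carcinoma": 3}
embedding_dim = 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(tokens):
    """Map a token sequence to a 2-D array: one row of numbers per word."""
    return np.stack([embeddings[vocab[t]] for t in tokens])

doc = ["left", "breast", "carcinoma"]
matrix = embed(doc)
print(matrix.shape)  # (3, 4): 3 words, each a 4-dimensional vector
```

In practice the embedding table is learned during training (or initialized from pretrained vectors), but the mechanics are the same: a lookup that turns text into a numeric matrix.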
The research team improved efficiency by developing a network that completes multiple tasks in roughly the same amount of time a single-task CNN needs for one.
Here’s a quick summary excerpted from the paper:
- “Multitask CNN (MTCNN) attempts to tackle document information extraction by learning to extract multiple key cancer characteristics simultaneously. We trained our MTCNN to perform 5 information extraction tasks: (1) primary cancer site (65 classes), (2) laterality (4 classes), (3) behavior (3 classes), (4) histological type (63 classes), and (5) histological grade (5 classes). We evaluated the performance on a corpus of 95 231 pathology documents (71 223 unique tumors) obtained from the Louisiana Tumor Registry. We compared the performance of the MTCNN models against single-task CNN models and 2 traditional machine learning approaches, namely support vector machine (SVM) and random forest classifier (RFC).”
- “MTCNNs offered superior performance across all 5 tasks in terms of classification accuracy as compared with the other machine learning models. Based on retrospective evaluation, the hard parameter sharing and cross-stitch MTCNN models correctly classified 59.04% and 57.93% of the pathology reports respectively across all 5 tasks. The baseline models achieved 53.68% (CNN), 46.37% (RFC), and 36.75% (SVM). Based on prospective evaluation, the percentages of correctly classified cases across the 5 tasks were 60.11% (hard parameter sharing), 58.13% (cross-stitch), 51.30% (single-task CNN), 42.07% (RFC), and 35.16% (SVM). Moreover, hard parameter sharing MTCNNs outperformed the other models in computational efficiency by using about the same number of trainable parameters as a single-task CNN.”
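The "hard parameter sharing" design described above can be sketched as a shared text-CNN trunk feeding one small classification head per task. The code below is a simplified numpy illustration under assumed dimensions (sequence length, embedding size, filter count are invented for brevity and are not the paper's hyperparameters); it shows the structural point that one forward pass through the shared convolution serves all five extraction tasks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes for illustration; the paper's actual values differ.
seq_len, embed_dim, n_filters = 20, 8, 16
tasks = {"site": 65, "laterality": 4, "behavior": 3,
         "histology": 63, "grade": 5}  # class counts from the paper

doc = rng.normal(size=(seq_len, embed_dim))             # embedded report
shared_filters = rng.normal(size=(3, embed_dim, n_filters))  # width-3 conv

# Shared convolution + ReLU + max-over-time pooling (the shared trunk).
windows = np.stack([doc[i:i + 3] for i in range(seq_len - 2)])  # (18, 3, 8)
conv = np.maximum(np.einsum("wke,kef->wf", windows, shared_filters), 0)
features = conv.max(axis=0)                             # (n_filters,)

# One small task-specific head per extraction task.
heads = {t: rng.normal(size=(n_filters, c)) for t, c in tasks.items()}
preds = {t: int(np.argmax(features @ W)) for t, W in heads.items()}
print(preds)  # one class index per task from a single shared pass
```

Only the heads differ per task, which is why the whole model has about as many trainable parameters as a single-task CNN, as the paper notes.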
The multitask CNN completed all five tasks in roughly the time a single-task CNN needs for one, while also outperforming it on every task, making it effectively five times as fast. However, Alawad said, “It’s not so much that it’s five times as fast. It’s that it’s n-times as fast. If we had n different tasks, then it would take one-nth of the time per task.”
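The n-times speedup and the near-identical parameter count both follow from the same arithmetic: the shared trunk dominates the cost, while the per-task heads are tiny. A back-of-the-envelope sketch with invented layer sizes (not the paper's) makes the point:

```python
# Illustrative parameter counts; sizes are hypothetical, not the paper's.
trunk = 300 * 8 * 16                  # shared convolutional filters
head = lambda classes: 16 * classes   # small task-specific output layer
tasks = [65, 4, 3, 63, 5]             # class counts for the 5 tasks

shared_total = trunk + sum(head(c) for c in tasks)        # one MTCNN
separate_total = sum(trunk + head(c) for c in tasks)      # 5 single-task CNNs
print(shared_total, separate_total)
```

Because the trunk is shared, the multitask model costs one trunk instead of five, so both training time per task and total parameters stay close to the single-task case.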
“The next step is to launch a large-scale user study where the technology will be deployed across cancer registries to identify the most effective ways of integration in the registries’ workflows. The goal is not to replace the human but rather augment the human,” said Gina Tourassi, director of the Health Data Sciences Institute and the National Center for Computational Sciences at the Department of Energy’s Oak Ridge National Laboratory.
Link to JAMIA paper (Automatic extraction of cancer registry reportable information from free-text pathology reports using multitask convolutional neural networks): https://academic.oup.com/jamia/article/27/1/89/5618621?guestAccessKey=815a822e-35ee-4904-a8ee-46c652ecd811