Exascale Computing Project Contributes to Accelerating Cancer Research

March 19, 2024

March 19, 2024 — What happens when Department of Energy (DOE) researchers join forces with chemists and biologists at the National Cancer Institute (NCI)? They use the most advanced high-performance computers to study cancer at the molecular, cellular and population levels. The results offer insights into cancer and accelerate advances in precision oncology and scientific computing.

Predicting cancer type and drug response using histopathology images from the National Cancer Institute’s Patient-Derived Models Repository. Image credit: Rick Stevens, Argonne Lab.

“Working with people who are super-passionate about solving cancer brings an entirely new type of energy and motivation into the collaboration,” said Rick Stevens, Argonne National Laboratory’s associate laboratory director for the Computing, Environment, and Life Sciences Directorate.

The DOE-NCI collaboration, which is part of the Cancer Moonshot, began in 2016 and encompasses three projects: AI-Driven Multiscale Investigation of the RAS/RAF Activation Lifecycle (ADMIRRAL); Innovative Methodologies and New Data for Predictive Oncology Model Evaluation (IMPROVE); and Modeling Outcomes Using Surveillance Data and Scalable AI for Cancer (MOSSAIC).

MOSSAIC automated the analysis and extraction of information from millions of cancer-patient records to help determine optimal cancer-treatment strategies across a range of patient lifestyles, environmental exposures, cancer types and healthcare systems.

ADMIRRAL’s goal was to better understand the RAS oncogenic signaling system. RAS is a protein embedded in the surface of every cell that switches on and off to send signals to the cell’s interior; when the switch gets stuck in the on position, this signaling system becomes a root cause of about 40% of cancers. The researchers are using large-scale computing to build a molecular dynamics model of the protein, creating simulations that span scales from an entire cell down to individual molecules and their atoms. AI can then use the simulations to help discover the intricacies of how RAS works.
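
To make the multiscale idea more concrete, below is a minimal, hypothetical sketch of the general pattern such workflows follow: a cheap coarse-grained model screens many configurations, a simple scoring rule flags the most interesting ones, and only those are promoted to an expensive fine-grained simulation. The simulate_coarse, interest_score, and simulate_fine functions are illustrative placeholders, not ADMIRRAL’s actual codes or models.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_coarse(n_frames=1000):
    """Stand-in for a cheap coarse-grained simulation: returns a
    1-D 'membrane distance' for a RAS-like protein in each frame."""
    return rng.normal(loc=2.0, scale=0.8, size=n_frames)

def interest_score(frames):
    """Toy scoring rule: frames where the protein sits unusually close
    to the membrane are 'interesting' and deserve atomistic detail."""
    return -frames  # smaller distance -> higher score

def simulate_fine(frame_value):
    """Stand-in for an expensive all-atom simulation of one frame."""
    return {"start_distance": frame_value,
            "refined_distance": frame_value + rng.normal(scale=0.05)}

coarse_frames = simulate_coarse()
scores = interest_score(coarse_frames)
top_k = np.argsort(scores)[-5:]   # promote only the top 5 frames
fine_results = [simulate_fine(coarse_frames[i]) for i in top_k]

print(f"screened {coarse_frames.size} coarse frames, "
      f"refined {len(fine_results)} at atomistic resolution")
```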

IMPROVE developed a framework to compare and evaluate computer models designed to predict drug response, optimize drug screening, and drive precision medicine for cancer patients. These promising models are based on deep learning, a type of machine learning that mimics the brain’s ability to recognize complex patterns and yield accurate predictions from large inputs of raw data.

The DOE’s Exascale Computing Project (ECP) plays a key role in all three of these ventures.

CANDLE Ties It All Together

In particular, these DOE-NCI endeavors are supported by the ECP’s CANcer Distributed Learning Environment (CANDLE) project, which deploys a scalable deep neural network code to exascale computers that can handle more than a million trillion calculations per second.

To create this infrastructure, Stevens, leader of IMPROVE, and colleagues moved AI tools to the exascale platform to build a software environment that enables working on these three projects without duplication of effort across the laboratories. “The Exascale Computing Project achieved great speedups on exascale hardware — it was amazingly successful,” he said.

CANDLE, which wrapped up at the end of 2023, started out as a bit of a computer science project. “We worked on the tools, the libraries for the exascale environment, and dealt with chip and machine performance,” Stevens said. “Techniques we developed are quite useful for AI and other problems beyond cancer, such as making headway on problems in materials science and gaining a better understanding of COVID-19.”

As the core deep-learning software system underlying multiple DOE-NCI projects, “CANDLE truly enabled open-minded thinking when considering whether machine learning or deep learning is a possibility for a given challenge,” said Eric Stahlberg, director of biomedical informatics and data science at the Frederick National Laboratory for Cancer Research (FNLCR), which joined the collaboration on behalf of NCI to launch CANDLE.

At a technical level, FNLCR and NCI contributed to the early development and direction of CANDLE, ensuring its software would be usable by the broader biomedical research community through workshops and training, and bringing essential data to the collaboration to drive development and innovative applications in cancer research.

CANDLE has changed thinking about how to approach cancer drug discovery using data from multiple sources. It also has supported essential research in RAS-related cancers — helping to bridge understanding and experimental observations across different time and size scales. “Its deep-learning models also boost the efficiency of information extraction from patient data to improve the available cancer surveillance research information,” Stahlberg said.

Improving Cancer Drug Discovery

Ultimately, ADMIRRAL and IMPROVE researchers intend to boost cancer drug discovery. If researchers can understand how RAS works, or can sequence a tumor’s RNA and DNA, they can work to predict which drugs have the potential to affect the RAS system.

In a precursor to IMPROVE, researchers compiled decades’ worth of data about known tumors, the drugs used to treat them and their outcomes. “We built machine-learning models to represent both the tumor and the drug to predict the response of the tumor to a given drug,” Stevens said. “The idea was to use them in preclinical experiments to explore new drugs and try to better understand the biology of the tumor — whether it responds or not to the new drug.”

IMPROVE’s approach is closely related to the concept of precision medicine, where patients get customized treatments based on the genetics of their tumors. “We did this for about five years, and were quite successful building these models,” Stevens commented.

Then, the AI community started building lots of similar types of models. “A few years ago, we decided rather than continuing to push on the model itself, we needed to build a system to allow us to compare and benchmark these models against each other, because there are so many of them,” Stevens said. “There are now more than 100 models from groups that are more or less aiming to predict the same thing, so the problem has shifted a bit to being able to understand which models are better for certain tumors or classes of drugs.”
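
As a rough illustration of what benchmarking many drug-response models against each other involves, the sketch below trains two generic regressors on the same synthetic (tumor, drug) features and scores them with the same metric on the same held-out split. The models, features, and metric are stand-ins chosen for brevity; they are not IMPROVE’s actual benchmark suite.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in data: rows are (tumor, drug) pairs, the target is
# a drug-response value such as growth inhibition.
X = rng.normal(size=(500, 40))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

candidates = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

# Every model sees the same split and is judged by the same metric,
# which is the core of a fair cross-model comparison.
for name, model in candidates.items():
    model.fit(X_train, y_train)
    score = r2_score(y_test, model.predict(X_test))
    print(f"{name:>14}: R^2 = {score:.3f}")
```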

The team discovered that no single way of representing drugs emerged as dominant. “Our representations of drugs had strengths and weaknesses, so we started combining the representations,” Stevens said. “We did the same thing with characterizing tumors, and at first we thought mutation data would be the most useful, but it turns out that gene-expression data was the most predictive or informative.” That makes sense: a gene, mutated or not, has an effect only if it is actually turned on.
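
The representation question can also be illustrated with a small, hypothetical experiment: train the same model on mutation-only features, expression-only features, and the combined set, then compare held-out accuracy. The synthetic data below is deliberately constructed so that expression carries most of the signal, purely to show the kind of comparison described; it is not the project’s data or its result.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 600

mutations = rng.integers(0, 2, size=(n, 20)).astype(float)  # binary mutation calls
expression = rng.normal(size=(n, 30))                       # continuous expression levels

# Synthetic response: mostly driven by expression, a little by mutations.
y = expression[:, :4].sum(axis=1) + 0.3 * mutations[:, 0] + 0.1 * rng.normal(size=n)

feature_sets = {
    "mutations only": mutations,
    "expression only": expression,
    "combined": np.hstack([mutations, expression]),
}

for name, X in feature_sets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print(f"{name:>15}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```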

This work “provides a way for the community to commonly evaluate models and share insights on data needed to improve these models,” Stahlberg said. “As a result of the IMPROVE project, the collective contributions of scientists into this problem are being brought together, harmonized and compared, which provides greater insight to benefit the community as a whole.”

Modernizing National Cancer Surveillance

ECP’s CANDLE is helping to speed and modernize national cancer surveillance as part of the MOSSAIC project. The team developed and deployed novel deep-learning, natural-language-processing solutions to rapidly screen and extract clinical information from unstructured clinical text documents, including pathology and radiology reports.
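
For readers unfamiliar with this kind of pipeline, the sketch below shows a deliberately simple stand-in for one step: assigning a primary-site label to short, fabricated report snippets with a bag-of-words classifier. MOSSAIC’s production systems use deep-learning models, full unstructured pathology reports, many coded fields, and vastly larger training sets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, fabricated snippets standing in for unstructured pathology text.
reports = [
    "invasive ductal carcinoma identified in left breast biopsy",
    "adenocarcinoma of the prostate, gleason score 7",
    "lung biopsy shows squamous cell carcinoma",
    "right breast lumpectomy with lobular carcinoma",
    "prostate needle core positive for adenocarcinoma",
    "non-small cell carcinoma in upper lobe of lung",
]
primary_site = ["breast", "prostate", "lung", "breast", "prostate", "lung"]

# Bag-of-words features plus a linear classifier: a deliberately simple
# stand-in for the deep NLP models used in production registries.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(reports, primary_site)

print(clf.predict(["core biopsy of breast with invasive carcinoma"]))
```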

CANDLE MOSSAIC is now used by 12 Surveillance, Epidemiology, and End Results (SEER) registries, where it “scans reports 18,000 times faster than cancer registrars,” said Gina Tourassi, associate laboratory director for the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory (ORNL). She led MOSSAIC with ORNL’s Heidi Hanson, who leads the laboratory’s Biostatistics and Biomedical Informatics group.

“The model auto-coded 20% of the cases with more than 98% accuracy — saving 7,800 hours of manual screening,” Tourassi added. “This level of performance paves the way for a modernized national cancer surveillance program to achieve near-real time cancer incidence reporting — a process that currently takes 22 months.”

Because this work is part of a translational project and meant to be fully developed for use by research and medical professionals, it presented distinct challenges. The team’s mission was to deliver end-to-end science, identifying existing data sources and tapping into them, evaluating the data, designing the computer models, and delivering AI-based solutions that users could deploy.

Tourassi is “proud that we delivered our milestones — demonstrating the power of transdisciplinary science enabled by world-class computing resources and domain experts that live at the bleeding edge of computational science.”

Cross-Agency Cooperation for the Win

Together, ADMIRRAL, IMPROVE, MOSSAIC, and CANDLE show the power of DOE-NCI cross-agency teamwork.

Early on, the teams spent a lot of time learning one another’s science and terminology, plus new ways of talking to others outside their fields. “This happens in other interdisciplinary collaborations but was particularly necessary here,” Stevens said. “And having partners from NCI allows the computing people to decode how we can help a lot faster.”

The interagency collaboration forced everyone to challenge the status quo within their own fields. “For NCI,” Tourassi said, “the project challenged the notion that deep learning isn’t appropriate for clinical language processing.”

Six years ago, some at NCI were skeptical when CANDLE introduced deep learning for automated information capture from clinical reports to the national surveillance program.

“For DOE, there was skepticism about natural-language processing being an exascale problem,” Tourassi said. “Fast forward to now. Large language models are the posterchild of AI applications on exascale computing platforms. Both agencies were able to advance our respective missions and deliver scientific breakthroughs with lasting value.”

CANDLE is “a tremendous example of team science — working together and sharing insights from multiple domains to create and improve results,” Stahlberg said. “Its capabilities have been guided by real problems to address real needs.”


Source: Sally Johnson, ECP
