TACC: Conquering Breast Cancer Using Supercomputers, Data, and Mathematical Modeling

May 21, 2024

May 21, 2024 — Breast cancer is the most common cancer among women worldwide, claiming nearly 670,000 lives in 2022, according to the World Health Organization. TACC supercomputers give scientists the computational resources and innovative data analysis tools they need to make new discoveries in understanding and treating breast cancer.

Breast cancer researchers are enlisting supercomputers in making new discoveries to improve treatment and understanding of the deadly disease. TACC systems and expertise are giving scientists much-needed data and computational resources for modeling tumor growth, optimizing treatment combinations, analyzing biopsy collections, and more. Credit: iStock.

The following examples illustrate strategies through which advanced computing is making strides toward conquering breast cancer.

Digital Twins in Oncology

Mathematical modeling has helped improve predictions of how triple-negative breast cancer (TNBC) tumors will respond to treatment, according to research led by Tom Yankeelov of the Oden Institute for Computational Engineering and Sciences at UT Austin.

TNBC cells lack three biomarkers commonly overexpressed in breast cancer — the estrogen receptor, the progesterone receptor, and human epidermal growth factor receptor 2 (HER2). TNBC is an aggressive form of breast cancer with fewer treatment options, and it is more common in Black women and in women under 40.

Yankeelov co-authored a 2022 study published in the journal Cancer Research that used MRI data from 56 patients with TNBC to develop calibrated models capable of early, patient-specific predictions of tumor response to treatment.

“Using patient-specific imaging data, we calibrated our biology-based mathematical model to make predictions of how tumors grow in space and time,” Yankeelov said. “These predictions have been shown to be highly accurate when predicting the response of triple-negative breast cancer patients to standard neoadjuvant chemotherapy.” This type of chemotherapy is widely accepted as the standard of care for early TNBC, but it requires weighing clinical benefit against harm from the treatment.

Thomas Yankeelov of the Oden Institute for Computational Engineering and Sciences at UT Austin develops calibrated mathematical models used to improve predictions of how cancer tumors grow and respond to therapy. Credit: Oden Institute.

Improved predictions provide physicians with guidance on whether a particular treatment is likely to work. “If our model predicts that the treatment is going to be beneficial, then they have more confidence staying the course with chemotherapy. Conversely, if our model predicts that the treatment is not going to be beneficial, then they have more confidence finding an alternative intervention,” Yankeelov said.

Yankeelov’s mathematical models describe how tumor cells change in space and time due to factors such as how the cells migrate, how they proliferate, and how they respond to therapy.

“What we do is make MRI measurements that let us calibrate those model parameters based on an individual patient’s MRI measurements,” Yankeelov said. “Once the model is calibrated, we run it forward to predict how that patient’s tumor will grow in space and time — this prediction can then be compared to actual measurements in the patient at a future time. It is these predictions that we are getting surprisingly good at.”
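The calibrate-then-predict loop Yankeelov describes can be sketched in miniature. The snippet below is a toy stand-in only: it fits a simple logistic growth law to hypothetical tumor-volume measurements and runs the fitted model forward, whereas the lab's actual models are spatially resolved, biology-based PDEs calibrated to MRI data. All parameter names and values here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for the calibrate-then-predict workflow: a logistic
# growth law V(t) = K / (1 + (K/V0 - 1) * exp(-r t)).  'r'
# (proliferation rate), 'K' (carrying capacity), and 'V0' (initial
# volume) are illustrative patient-specific parameters.
def logistic_growth(t, r, K, V0):
    return K / (1.0 + (K / V0 - 1.0) * np.exp(-r * t))

# Hypothetical tumor-volume measurements (cm^3) at imaging visits (days)
t_obs = np.array([0.0, 14.0, 28.0, 42.0])
V_obs = np.array([1.0, 1.8, 2.9, 4.1])

# Calibrate: fit patient-specific parameters to the observed data
(r, K, V0), _ = curve_fit(logistic_growth, t_obs, V_obs,
                          p0=[0.05, 10.0, 1.0], maxfev=10000)

# Predict: run the calibrated model forward to a future visit,
# where the prediction can be compared against a new measurement
V_pred = logistic_growth(84.0, r, K, V0)
print(f"r={r:.3f}/day, K={K:.1f} cm^3, predicted V(84d)={V_pred:.2f} cm^3")
```

The same pattern — calibrate parameters to early measurements, then integrate forward — carries over to the full spatiotemporal models, only at a scale that requires supercomputing.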

Going forward, his lab aims to move beyond predicting whether a patient will respond to therapy, and instead use mathematical modeling to identify an optimal intervention strategy.

“If you have a model that can accurately predict the spatial and temporal development of a tumor, then we use a supercomputer to try an array of treatment schedules to identify the one that works best. That is, we use the mathematical model to build a ‘digital twin’ to try a myriad of treatment schedules to identify the one with the highest probability of success. That is where the research and field is going,” Yankeelov added.
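The "try an array of treatment schedules" idea can be illustrated with a small search over candidate dosing schedules on a toy growth model. Everything here is an assumption for illustration — the growth and drug-kill terms, the parameter values, and the weekly dosing grid are not the lab's actual model, which is a calibrated PDE evaluated on a supercomputer.

```python
import itertools
import math

# Toy "digital twin" schedule search: forward-run an assumed
# calibrated growth model under each candidate dosing schedule and
# keep the schedule that leaves the smallest tumor burden.
R, K = 0.06, 10.0          # assumed growth rate (/day) and capacity (cm^3)
KILL, DECAY = 0.35, 0.15   # assumed drug kill strength and clearance rate

def simulate(dose_days, days=84, v0=2.0):
    """Tumor volume after 'days', dosing one unit on each day in dose_days."""
    v, drug = v0, 0.0
    for day in range(days):
        if day in dose_days:
            drug += 1.0                       # unit dose delivered
        v += R * v * (1 - v / K) - KILL * drug * v
        v = max(v, 1e-6)                      # volume stays positive
        drug *= math.exp(-DECAY)              # drug clears over time
    return v

# Candidate schedules: choose 3 dosing days from a weekly grid
grid = [0, 7, 14, 21, 28, 35]
best = min(itertools.combinations(grid, 3),
           key=lambda sched: simulate(set(sched)))
print("best schedule (days):", best,
      "final volume:", round(simulate(set(best)), 3))
```

In the real setting the "simulate" step is an expensive PDE solve, which is why screening many schedules for one patient within a clinically useful window calls for a system like Frontera or Lonestar6.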

Yankeelov’s lab used TACC’s Stampede2 supercomputer and Corral high-performance storage system in developing digital twins. The turnaround is fast — the goal is to have a digital twin working within 24 hours of receiving a patient’s data, in time to help a physician with treatment decisions, according to Yankeelov. Reaching that goal requires access to a supercomputer.

“Over the last eight years, TACC has provided extensive computational support for our research efforts via Lonestar5, Lonestar6, and Frontera,” Yankeelov said. “Indeed, it started within the first weeks of our arrival in Austin where TACC staff visited our lab to provide a rapid tutorial on how to start using the systems. TACC has been there every step of the way as we develop methods for improving the treatment of — and outcomes for — patients battling cancer.”

HER2+ and Combined Therapies

HER2+ breast cancer overexpresses the gene that makes the HER2 protein — it is characterized as an aggressive breast cancer that can respond well to treatments such as Trastuzumab (a monoclonal antibody), which is typically administered in combination with Doxorubicin (a chemotherapy drug). The challenge for researchers and physicians lies in optimizing the combination of these two drugs to maximize treatment efficacy.

Ernesto Lima of the Oden Institute uses TACC’s Lonestar6 supercomputer to develop computer models that optimize treatment outcomes for HER2+ breast cancer. Credit: TACC.

“I developed several mathematical models to assess their ability to replicate experimental data with mice receiving various drug combinations obtained by our collaborator Anna Sorace,” said Ernesto Lima of the Oden Institute.

Lima and Yankeelov co-authored a 2022 study published in Computer Methods in Applied Mechanics and Engineering. It developed a family of models to capture the effects of combined Trastuzumab and Doxorubicin therapy on tumor growth, with the aim of optimizing the outcome of the combination therapy while minimizing the dosage, and thereby the toxic side effects, needed to achieve tumor control.

“We created 10 models and calibrated them using the experimental data,” Lima said. “Calibration involves adjusting parameters, such as the proliferation rate, which dictates how fast the tumor volume increases over time, to align the model’s output with the experimental data.”

Lima was awarded supercomputer allocations on TACC’s Lonestar6 system through the University of Texas Research Cyberinfrastructure project to calibrate the models; when parallelized, the computations ran 13 times faster than in serial. Parallelization divides large calculations into smaller ones that run simultaneously rather than one at a time.
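The parallelization Lima describes maps naturally onto fitting independent candidate models at the same time. The sketch below is a minimal illustration, not the study's code: two toy growth models (names, data, and initial guesses are all assumptions) are each calibrated to the same dataset in a separate process rather than one after the other.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from scipy.optimize import curve_fit

# Two toy candidate models; the study calibrated 10 biology-based models.
def exponential(t, v0, r):
    return v0 * np.exp(r * t)

def logistic(t, v0, r):
    K = 10.0  # assumed fixed carrying capacity for illustration
    return K / (1 + (K / v0 - 1) * np.exp(-r * t))

MODELS = {"exponential": exponential, "logistic": logistic}

# Hypothetical tumor-volume data (cm^3) at measurement days
T = np.array([0.0, 7.0, 14.0, 21.0])
V = np.array([1.0, 1.5, 2.2, 3.1])

def calibrate(name):
    """Fit one model to the data; return its name and sum of squared residuals."""
    params, _ = curve_fit(MODELS[name], T, V, p0=[1.0, 0.05])
    resid = V - MODELS[name](T, *params)
    return name, float(np.sum(resid ** 2))

if __name__ == "__main__":
    # Each calibration runs in its own process instead of serially
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(calibrate, MODELS))
    best = min(results, key=results.get)
    print("best-fitting model:", best, results)
```

Since each model's calibration is independent of the others, the work scales out cleanly across cores or nodes, which is where the reported 13x speedup over serial execution comes from.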

HER2+ breast cancer overexpresses the gene that makes the protein HER2 receptor. Credit: Ernesto Lima, Oden Institute.

After identifying the best model to replicate the data, Lima’s team optimized the treatment protocol.

“Using our model, we determined the optimal order and timing of drug delivery to maximize treatment efficacy. One treatment protocol, with the same drug amount as in the experiments, achieved a 45 percent reduction in tumor size compared to the experimental controls,” he said.

The team sought ways to maintain treatment efficacy while reducing the drug concentration because of potential toxicity. “We successfully reduced the concentration of Doxorubicin by almost 43 percent, while maintaining the same treatment outcome as in the experiments,” Lima added.

“Without TACC, our ability to explore diverse treatment options and solve complex mathematical models, driving forward our understanding of tumor biology, would be significantly hindered,” he continued.

To validate their theoretical results, Sorace and her team are evaluating the identified protocols in a new set of experiments with mice. Preliminary results are hopeful — they suggest that the optimized protocol is more effective than the original protocols. However, there is a long road ahead before the work can enter clinical trials.

“The experiments were done with a limited number of doses per drug and treatment protocols,” Lima concluded. “However, the framework itself could be applied to different types of treatments where you have multiple drugs being delivered.”

Biopsy Data Gold Mine

UT Austin has gained a veritable gold mine of de-identified breast cancer data and preserved frozen tissue samples of breast and other carcinomas, thanks to a generous donation in the spring of 2024 from James L. (Jim) Wittliff and his wife and collaborator, Mitzie, of the University of Louisville School of Medicine.

Mitzie (left) and Jim (right) Wittliff of the University of Louisville School of Medicine donated a large collection of breast cancer data and preserved frozen tissue samples to UT Austin in hopes of making it more available for future discoveries. Ari Kahn (center) leads the project at TACC to make the data accessible to more scientists. Credit: TACC.

“This Database and Tissue Biorepository contains among the most highly quantified datasets of breast cancer biomarkers in the world, with several of the assays such as those for estrogen and progestin receptor proteins representing gold standard breast cancer tests,” said Wittliff.

In the 1980s, Wittliff co-developed these latter two biomarker tests with NEN/DuPont, and the tests were approved by the FDA. More than 5,000 pristine frozen breast, endometrial, ovarian, and colon cancer biopsies, along with nuclear pellets containing DNA — collected from patients served by Wittliff’s clinical laboratory and curated over a lifetime of research — have been transferred and are now stored at the Dell Medical School. In addition, a treasure trove of comprehensive de-identified biomarker and clinical data will be stored and managed at TACC.

“Our immediate goal is to analyze these data, probably in the context of the NIH’s The Cancer Genome Atlas Program and other data,” said Ari Kahn of the Life Sciences Computing Group at TACC.

The irreplaceable biopsies are now preserved for other scientists to use for clinical trials in silico and to develop future companion diagnostic tests. Many of the tissue specimens have data associated with them such as protein tumor markers; genomic data on gene expression; patient characteristics such as age, sex, and smoking history; disease properties such as tumor size and pathology; and clinical follow-up such as surgeries and chemotherapy treatments.

“Wittliff is energized to expedite the use of the comprehensive data and unique samples to advance cancer diagnosis, treatment approaches, and ways to assess risk of recurrence of carcinomas, and is excited to support UT Austin, his alma mater, with this amazing gift,” Kahn added. “TACC will steward the data on TACC’s Corral system and is planning on making it available in the future to other scientists online through tools such as a web portal.”

Cancer and AI

Artificial intelligence has emerged as a tool for the sciences, helping researchers make progress on biological problems such as high-throughput virtual drug screening and planning chemical synthesis pathways. According to Yankeelov, however, it is important to point out AI’s fundamental limitations in informing scientists about the most important problems in oncology.

“In studying cancer, the problem with the AI approach is that cancer is a notoriously heterogeneous disease. In fact, it is not just one disease — it is more than 100 diseases. The issue is with needing a training set to calibrate an AI algorithm,” Yankeelov said.

For example, consider a patient diagnosed with one of the five labeled subtypes of triple-negative breast cancer.

“To use an AI-based approach to predict how this patient needs to be treated, one needs to have a training data set that consists of that subtype of triple-negative breast cancer in addition to all of the possible therapeutic regimens that could be received,” Yankeelov said.

“That training set does not exist, and it will never exist because the diseases are getting more specifically labelled and the treatments are getting more targeted. Furthermore, even if it did exist, it does not account for the unique characteristics of this patient’s cancer because the patient is different than everyone else in that training set.”

Challenging Road Ahead

Cancer remains one of the biggest health challenges facing society. According to Yankeelov and Lima, the computational resources provided by TACC are essential in advancing tumor models and treatment options by facilitating rigorous testing and refinement of various mathematical models.

TACC offers scientists the computational resources they need to make discoveries that benefit breast cancer patients. Rising breast cancer survival rates over the past decade offer a glimmer of hope, thanks to awareness campaigns and increased funding for research.


Source: Jorge Salazar, TACC
