Enlisting Deep Learning in the War on Cancer

By John Russell

December 7, 2016

Sometime in Q2 2017 the first ‘results’ of the Joint Design of Advanced Computing Solutions for Cancer (JDACS4C) will become publicly available, according to Rick Stevens. He leads one of three JDACS4C pilot projects pressing deep learning (DL) into service in the War on Cancer. The pilots, supported in part by DOE exascale funding, not only seek to do good by advancing cancer research and therapy but also to advance deep learning capabilities and infrastructure with an eye towards eventual use on exascale machines.

By any standard, the U.S. War on Cancer and the Precision Medicine Initiative (PMI) are ambitious. Past wars on cancer haven’t necessarily fared well, which is not to say little has been accomplished. Today’s timing seems more promising. Progress in biomedical science and the ramp-up of next-gen leadership computers (en route to exascale) are powerful enablers. Stir in the rapid emergence of deep learning to exploit data-driven science and many see greater cause for optimism. Not by chance was the opening plenary panel at SC16 on precision medicine and the role of HPC.

The three JDACS4C pilots span molecular to population scale efforts in support of the CANcer Distributed Learning Environment (CANDLE) project: they are intended to “provide insight into scalable machine learning tools; deep learning, simulation and analytics to reduce time to solution; and inform design of future computing solutions.” The hope is also to establish “a new paradigm for cancer research for years to come by making effective use of the ever-growing volumes and diversity of cancer-related data to build predictive models, provide better understanding of the disease and, ultimately, provide guidance and support decisions on anticipated outcomes of treatment for individual patients.”

Rick Stevens, ANL

These are ambitious goals. Sorting out JDACS4C’s precise lineage is a little challenging – it falls broadly under the Precision Medicine Initiative and the NCI Cancer Moonshot, and has also been lumped under NSCI. Stevens noted that early discussions to create the effort started a couple of years ago, with the first funding issued in the August time frame. Here’s a snapshot of the three pilots:

  • RAS Molecular Project. This project (Molecular Level Pilot for RAS Structure and Dynamics in Cellular Membranes) is intended to develop new computational approaches supporting research already being done under the RAS Initiative. Ultimately the hope is to refine our understanding of the role of the RAS (gene family) and its associated signaling pathway in cancer and to identify new therapeutic targets uniquely present in RAS protein membrane signaling complexes.
  • Pre-Clinical Screening. This project (Cellular Level Pilot for Predictive Modeling for Pre-clinical Screening) will develop “machine learning, large-scale data and predictive models based on experimental biological data derived from patient-derived xenografts.” The idea is to create a feedback loop, where the experimental models inform the design of the computational models. These predictive models may point to new targets in cancer and help identify new treatments.
  • Population Models. This project, at the population scale, aims to build models able to forecast “patient trajectories” by mining cancer surveillance data – pathology reports, treatments, outcomes, lifestyle, and demographics – held by NCI, NIH, FDA, pharma, and payor organizations.

Not surprisingly, there are many organizational pieces required. NCI components include the Center for Biomedical Informatics and Information Technology (CBIIT), the Division of Cancer Treatment and Diagnosis (DCTD), the Division of Cancer Control and Population Science (DCCPS), and the Frederick National Laboratory for Cancer Research. There are also four DOE National Laboratories formally designated on the project – Argonne National Laboratory, Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory.

As the projects came together, “We realized each had a need for deep learning and different uses of it. So the idea is that we would all work together on building both the software environment and network topologies and everything we would need for the three projects so we wouldn’t have duplication,” said Stevens. The researchers defined key benchmarks that “are tractable kinds of deep learning problems that are aligned with what we have to solve for the different cancer sub problems.”

An early first step was attracting vendor participation – something that turned out to be easy, said Stevens, because virtually all the major HPC vendors are aggressively ramping up DL roadmaps. Most see the JDACS4C pilots as opportunities to learn and refine. Stevens said JDACS4C has collaborations with Intel, Cray, NVIDIA, and IBM, among others.

“All of the labs have DGX-1s and NVIDIA has optimized most of the common frameworks for the different GPUs, Pascal, etc. The DGX-1 is like an appliance so anything we build that runs on the DGX-1 can be easily distributed. Intel has its own extensive plans and not all is public yet. I can say that we are collaborating with all the right parts of Intel,” said Stevens, an ANL researcher and leader of the pre-clinical screening project.

Indeed Intel has been busy, buying Nervana (a complete platform for DL) and recently rolling out expanded plans. “They are talking about versions of Knights X series that are optimized for machine learning. Knights Mill is the first version of that part of their roadmap,” said Stevens. The chip giant also introduced a DL inference accelerator card at SC16; it’s a field-programmable gate array (FPGA)-based hardware and software solution for neural network acceleration. Stevens suggests Intel, like NVIDIA, is developing an appliance strategy.

“Intel is very much trying to define a strategy that differentiates at some level between the platform for training and for inferencing. Most deep learning systems now do inferencing on the ‘quasi’ client side – on smaller platforms than used for training,” he said. Intel wants to ensure “future IA architectures are good at inferencing.”

Not surprisingly, there’s a fair amount of effort assessing the many DL frameworks coming out of Google, Microsoft, Facebook, et al. “We are evaluating which frameworks work best for our problems and we are working with vendors to optimize them on the hardware. We’re also working with Livermore, which has an internal project to build a scalable artificial neural network framework called LBANN,” said Stevens.

The plan is to develop “our models in a way that is independent of the frameworks so we can swap out the frameworks as they evolve without having to recode our models. This is a very common approach with deep learning where you have a scripting layer that captures your model representation – the meta-algorithms for training, data management, etc. – and we are working with both the academic community and NVIDIA on the workflow engine at the top. So we have kind of a stacked architecture and it involves collaborating with all of the different groups around the DL landscape.”
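The scripting-layer approach Stevens describes can be sketched in miniature: the model is captured as plain data, and small per-framework builders translate that data into concrete layers. Everything below – the spec fields, the backend names, the builder functions – is an illustrative invention, not the actual CANDLE interfaces.

```python
# A toy sketch of a framework-neutral model spec plus per-backend builders.
# Field names and backends are hypothetical, for illustration only.

MODEL_SPEC = [
    {"type": "dense", "units": 64, "activation": "relu"},
    {"type": "dropout", "rate": 0.1},
    {"type": "dense", "units": 1, "activation": "sigmoid"},
]

def build_for(backend, spec):
    """Translate the framework-neutral spec into backend-specific layers.

    Real builders would call Keras or PyTorch constructors; these just
    render the constructor each backend would use, keeping the sketch
    runnable without either framework installed.
    """
    builders = {
        "keras": lambda layer: f"keras.layers.{layer['type'].capitalize()}",
        "torch": lambda layer: f"torch.nn.{layer['type'].capitalize()}",
    }
    return [builders[backend](layer) for layer in spec]

# Swapping frameworks changes one argument, not the model definition.
keras_layers = build_for("keras", MODEL_SPEC)
torch_layers = build_for("torch", MODEL_SPEC)
```

The payoff is the one Stevens names: as frameworks evolve, only the builders change; the model definitions themselves never need recoding.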

“What’s interesting,” said Stevens, “is the vendors for the next-gen platforms are strongly embracing the architectural ideas and features needed for accelerated machine learning in addition to traditional kind of physics-driven simulation.” He noted that market pressures and the fast growth of DL compared to the traditional HPC are pushing them in this direction. “It’s also giving us insight into DOE applications that are going to start looking like this – where there will be traditional physics-driven simulation, but where often we can find a way to leverage machine learning [too].”

Sharing the learning is an important component of the pilot projects. “We are abstracting model problems for the machine learning community to work on that are kind of sanitized versions of the seven CANDLE benchmarks we’re working on,” said Stevens. That will include distributable data and code, all to be available on GitHub. The first of those elements are expected in Q2.

Individual pilot teams are also mounting their own outreach activities with the academic community. In terms of compute power for the pilots, “We are targeting platforms, particularly the CORAL platforms, new machines at Argonne, Oak Ridge and Livermore, and [eventually] exascale. Everything is sort of ecumenical so it’s not GPU specific or manycore specific.”

It’s interesting to look at the different ways in which the three projects plan to use deep learning.

The RAS project, at the molecular scale, is the smallest dimensional scale of all of the projects. RAS, you may know, is a well-known family of oncogenes that code for signaling proteins embedded in the cell membrane. These proteins control signaling pathways that extend into the cell and drive many diverse cellular processes. RAS is currently implicated in about 30 percent of cancers, including some of the toughest, such as pancreatic cancer. The pilot project will combine simulation and wet lab screening data in an effort to elaborate the details of the RAS-related signaling cascades and hopefully identify key places to intervene and new drugs to use.

Even a relatively small tumor may have “thousands of mutations, both driver mutations and many passenger mutations,” said Stevens. These genetic miscues can alter the important details of signaling networks. For many years RAS itself, as well as its associated signaling networks, has been a drug target, but as Stevens pointed out, “the behavior of that signaling network is very non-intuitive. Sometimes if you hit one of the downstream components, it actually creates negative feedback, which actually increases the effect you are trying to inhibit.”

In the RAS project, the simulation is basically a molecular dynamics exercise conducted at various granularities extending all the way down to atomistic behavior including quantum effects. The computational power required, not surprisingly, depends on the level of granularity being simulated and can be substantial.

“Machine learning is being used to track the state space that the simulation is going through and to make decisions – do we zoom in here, do we zoom out, do we change the parameters, are we looking in a different part of the ensemble space. It’s basically acting like a smart supervisor of this simulation to more effectively use it.

“In some sense it’s like the network is watching a movie and saying, ‘OK, I’ve seen this part of the movie before, let’s fast forward,’ or ‘wow, this is really interesting, I’ve never seen this before, let’s use slow motion and zoom in.’ That’s sort of what the machine learning is doing in the simulation. It’s able to fast forward and skip around in some sense,” said Stevens.
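The fast-forward/zoom-in pattern Stevens describes can be made concrete with a toy loop: a novelty score watches the simulation state and adapts the step size – large steps through familiar territory, small steps in novel regions. The simple distance-based novelty rule below is a stand-in for a real learned model, and the integrator is just a sine wave; both are illustrative assumptions, not the pilot’s actual code.

```python
# Toy sketch of ML as a "smart supervisor" of a simulation: a novelty
# score (here hand-rolled, standing in for a learned model) decides
# whether to fast-forward (large timestep) or zoom in (small timestep).
import math

def novelty(state, seen, radius=0.5):
    """A state is novel if no previously seen state lies within `radius`."""
    return all(abs(state - s) > radius for s in seen)

def supervised_simulation(steps=50):
    state, t, dt = 0.0, 0.0, 1.0
    seen, log = [], []
    for _ in range(steps):
        state = math.sin(t)          # stand-in for an MD integrator step
        if novelty(state, seen):
            dt = 0.1                 # "zoom in": novel region, small step
            seen.append(state)
        else:
            dt = 1.0                 # "fast forward": familiar territory
        log.append((t, dt))
        t += dt
    return log

trace = supervised_simulation()
```

Run on the toy integrator, the supervisor alternates between both regimes – slowing down each time the trajectory enters a region it has not catalogued, then skipping ahead once the neighborhood is known.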

The pre-clinical screening project, led by Stevens, is an ambitious effort to sift through basically as much cancer preclinical and clinical data as it can lay hold of and combine that with new data generated from mouse models to build predictive models of drug-tumor interactions. It’s an in silico and experimental feedback approach. Ultimately, given a specific tumor whose molecular attributes (gene expression, SNPs, proteomics, etc.) have been characterized, it should be possible to plug that data into a model to determine the best therapeutic regime.

The subtlety here, said Stevens, is that there has been a lot of machine learning work done in this area at kind of the small scale, that is, on single classes of tumors or relatively small classes of drugs. “What we are trying to do with the deep learning is to integrate all of this information across thousands of cell lines, tens of thousands of compounds that have been screened against a smaller number of cell lines, and then be able to project that into a mouse. You grow a colony of mice derived from that human tumor, and these mice become proxies for human clinical trials. So I can try different compounds on the colony of tumor mice to provide information about how my tumor might respond to them if given as a drug.”
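The data shape behind such a drug-response model is worth making explicit: each input pairs a tumor’s molecular profile with a compound’s features, and the model predicts a response. A real CANDLE model is a deep network trained on thousands of cell lines; the 1-nearest-neighbor stand-in below only illustrates the pairing – the feature vectors, labels, and numbers are all invented.

```python
# Hedged sketch of predictive drug-response modeling: (tumor profile +
# drug descriptor) -> response label. All data here is fabricated for
# illustration; a real model would be a trained deep network.

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training rows: concatenated (expression profile, drug features) -> label
TRAINING = [
    ((0.9, 0.1, 0.3, 1.0, 0.0), "sensitive"),
    ((0.2, 0.8, 0.7, 0.0, 1.0), "resistant"),
    ((0.8, 0.2, 0.4, 0.0, 1.0), "sensitive"),
]

def predict_response(tumor_profile, drug_descriptor):
    """Predict response for a new tumor/drug pair via nearest neighbor."""
    features = tuple(tumor_profile) + tuple(drug_descriptor)
    _, label = min(
        ((distance(features, f), lab) for f, lab in TRAINING),
        key=lambda pair: pair[0],
    )
    return label
```

The point of the sketch is the interface, not the algorithm: once a tumor has been molecularly characterized, querying the model is a single function call over its profile plus a candidate compound.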

A huge challenge is being able to make sense of all the historical data, much of which is unstructured and often subjective (e.g. pathology reports). “One of the first things that we have done is to build classifiers to tell us what type the tumor is or where the body site is [based on diverse data],” he said. Not infrequently the data may be suspect. “If it’s a new dataset we run it through our classifiers and they may say, really, this is not from the liver, it’s from some other place.”

As a rule, the preclinical data is outcome based; it doesn’t explain how the result was achieved.

“Right now we can build machine learning models that are pretty accurate at, say, predicting a drug response or tumor type or outcome, but they can’t tell us very effectively why. They are not explanatory, not mechanistic,” said Stevens. “What we want to do is bring in mechanistic models or mechanistic data in some way and hybridize that with machine learning models so that we get two things. We get the ability to have a highly accurate predictive model but also a model that can explain why it makes that prediction. So the idea of this hybrid approach is a wide open space and we think that this will generalize into many fields.” Obtaining large and high quality data for training models remains challenging, he said.

The third project strives to develop models able to make population-scale forecasts, what Stevens calls “patient trajectories.” It’s basically mining surveillance data across the country. Although somewhat dispersed, there is a great deal of patient data held by NCI, NIH, FDA, pharma, and payor organizations (pathology reports, treatments, outcomes, lifestyle, demographics, etc.). Unfortunately, like a lot of biomedical data, it’s largely unstructured. “We can’t really compute on it in the way we want to so we are using machine learning to translate the unstructured data into structured data we can compute on,” said Stevens.

“So for example we want to read all the pathology reports with a machine and pull out, say, the biomarkers, the mutational state, or the drugs and so on, such that we can then build profiles that are consistent. Think of it as a population-based model. In the preclinical screening pilot, let’s say we uncover some treatments and strategies that are very effective on a certain type of cancer. We want to take that information and feed it into the population model and say, ‘If this became a common therapy, how much would it change the statistics globally or nationally,’ or something like that.”
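The unstructured-to-structured step Stevens describes can be illustrated with a minimal stand-in: pulling named fields out of free-text pathology notes. The pilots use machine learning for this; a regex extractor is used below purely to make the data flow concrete, and the report text and field names are invented.

```python
# Minimal stand-in for structuring unstructured pathology text. Real
# pipelines use ML models; regexes here just make the idea concrete.
# The sample report and schema are fabricated for illustration.
import re

REPORT = "Site: liver. KRAS mutation detected. Treated with gemcitabine."

def structure(report):
    """Extract a consistent structured record from a free-text report."""
    fields = {}
    site = re.search(r"Site:\s*(\w+)", report)
    if site:
        fields["body_site"] = site.group(1).lower()
    # Gene symbols are typically short runs of capitals, e.g. KRAS.
    fields["mutations"] = re.findall(r"\b([A-Z]{2,6}\d*)\s+mutation", report)
    drug = re.search(r"Treated with\s+(\w+)", report)
    if drug:
        fields["treatment"] = drug.group(1)
    return fields

record = structure(REPORT)
```

Once every report is reduced to the same structured record, the consistent profiles Stevens mentions – and the population statistics built on them – become straightforward to compute.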

It’s also a way to link all of the pilots, said Stevens. Insight from the RAS project might later be used to look at subclasses of cancers where a new treatment might work; that in turn could be fed into the population model to understand what its impact might be.

It’s still early days for the JDACS4C pilot projects, but hopes are high. Stevens noted both NCI and DOE are getting access to things they don’t readily have. “NCI does not have a lot of mathematicians and computer scientists, which DOE has. They also don’t have access to leadership machines. What we (DOE) are getting is access to all of this great experimental data, experimental facilities, public databases.”
