GPU-based Deep Learning Enhances Drug Discovery Says Startup

By John Russell

May 26, 2016

Sifting the avalanche of life sciences (LS) data for insight is an interesting and important challenge, and many approaches are used with varying success. Recently, improved hardware – primarily GPU-based – and better neural network architectures have brought deep learning to the fore. Two recent papers report that deep neural networks outperform a typical machine learning approach (a support vector machine model) in sieving LS data for drug discovery and personalized medicine purposes.

The two papers, admittedly driven by a commercial interest (Insilico Medicine), are nevertheless more evidence of deep neural network (DNN) progress in LS research, where large datasets with high dimensionality have long been difficult to handle. Using DNNs to train models and produce answers is proving quite effective; in these two studies, both straightforward and more complicated neural network techniques were used.

Part of what’s interesting here is the broad applicability of the DNN approach. As the authors (listed below) note there are many in silico approaches to drug discovery and disease classification, including efforts to use transcriptional response to predict functional properties of drugs. Neural networks’ natural knack for handling high dimensional data is an important capability in LS. Deep learning has already proven very valuable in a range of activities spanning simple image recognition to physics applications.

Broadly, neural networks try to emulate the way biological neural networks operate. Artificial neural networks are generally presented as systems of interconnected “neurons” which exchange messages between each other. The connections have numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning. In essence they can be trained to understand and solve classes of problems.

For example, a neural network for handwriting recognition might be defined by a set of input neurons that are activated by the pixels of an input image. After being weighted and transformed by a function (determined by the network’s designer), the activations of these neurons are then passed on to other neurons. This process is repeated until finally, the output neuron that determines which character was read is activated.
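The weighted, layer-by-layer pass described above can be sketched in a few lines of Python. This is a toy illustration of a feed-forward pass only, not the networks used in the papers; the layer sizes and activation function are arbitrary choices.

```python
import numpy as np

def forward(x, weights, biases):
    """One feed-forward pass: each layer computes a weighted sum of its
    inputs, adds a bias, and applies a nonlinearity before passing the
    activations on to the next layer."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(a @ W + b)
    return a

# Toy network: 4 input "pixels" -> 3 hidden neurons -> 2 output neurons
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((3, 2))]
biases = [np.zeros(3), np.zeros(2)]

out = forward(np.array([0.0, 1.0, 1.0, 0.0]), weights, biases)
```

Training then consists of tuning the numeric weights so that the output neurons activate correctly for known examples, which is the "experience" the article refers to.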

The first study[i] relied on a standard multilayer perceptron (MLP), a feed-forward artificial neural network model that maps sets of input data onto a set of appropriate outputs. In this instance, researchers worked with data from three cell lines (A549, MCF-7, and PC-3, from the LINCS project) that were treated with various compounds to elicit gene expression transcriptional profiles. Researchers began by classifying the compounds into therapeutic categories with a DNN based solely on the transcriptional profiles. “After that we independently used both gene expression level data for ‘landmark genes’ and pathway activation scores to train DNN classifier.” In total, the study analyzed 26,420 drug perturbation samples. Shown below is a representation of the DNN used in the drug study.

Study design: Gene expression data from the LINCS Project was linked to 12 MeSH therapeutic use categories. The DNN was trained separately on gene expression level data for “landmark genes” and on pathway activation scores for significantly perturbed samples, forming input layers of 977 and 271 neural nodes, respectively.

The details of the study are fascinating. Use of all the criteria was key to accuracy and the DNN effectiveness in coping with high dimensionality was a critical enabler.
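The classification setup in the caption can be sketched as follows. This is a minimal stand-in, assuming synthetic data in place of the LINCS profiles and an illustrative single hidden layer; the 977-feature input and 12 output categories match the study design, but the hidden-layer size and everything else here are guesses, not the architecture reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for the 977 "landmark gene" expression features and the
# 12 MeSH therapeutic-use categories; real inputs come from LINCS.
n_samples, n_features, n_classes = 64, 977, 12
X = rng.standard_normal((n_samples, n_features))

# One illustrative hidden layer of 128 units, randomly initialized.
W1 = rng.standard_normal((n_features, 128)) * 0.01
b1 = np.zeros(128)
W2 = rng.standard_normal((128, n_classes)) * 0.01
b2 = np.zeros(n_classes)

def predict_proba(X):
    """Forward pass: ReLU hidden layer, then a softmax that assigns
    each sample a probability over the 12 therapeutic categories."""
    h = np.maximum(0.0, X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

proba = predict_proba(X)
pred = proba.argmax(axis=1)  # predicted category per sample
```

In the study the same classifier design was trained independently on the landmark-gene inputs (977 nodes) and on the pathway activation scores (271 nodes).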

In the second study,[ii] a more complicated ensemble approach proved most effective. Notably, this wasn’t a gene expression data analysis; rather, it was based on blood-based markers. Data from roughly 60,000 blood samples from a single laboratory were analyzed. The five most predictive markers – albumin, glucose, alkaline phosphatase, urea, and erythrocytes – were identified. The best performing DNN achieved 81.5 percent accuracy, while the entire ensemble reached 83.5 percent. The paper suggests the ensemble approach is likely most effective for integrating multimodal data and tracking integrated biomarkers of aging.
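The ensemble idea is simple to sketch: independently trained models each score the same blood-marker inputs, and their predictions are averaged. Everything below is a hedged toy version; the `member` models are random sigmoid scorers standing in for trained DNNs, and the data are synthetic, with only the five marker names taken from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic values for the five markers named in the study:
# albumin, glucose, alkaline phosphatase, urea, erythrocytes.
X = rng.standard_normal((10, 5))

def member(seed):
    """Stand-in for one independently trained network: maps the five
    markers to a probability via a random linear model plus sigmoid."""
    w = np.random.default_rng(seed).standard_normal(5)
    return lambda X: 1.0 / (1.0 + np.exp(-(X @ w)))

models = [member(s) for s in range(5)]

# Ensemble prediction: average the members' probabilities, which
# typically smooths out individual models' errors.
ensemble_proba = np.mean([m(X) for m in models], axis=0)
```

Averaging (or otherwise combining) member outputs is what lets an ensemble beat its best single member, as the 83.5 versus 81.5 percent figures illustrate.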

Both studies required substantial compute power, including the parallel processing capability of GPUs. NVIDIA assisted by providing early access to its DIGITS DevBox, a roughly 30-teraflops deep learning machine featuring four Titan X GPUs. “We also used a 2X Tesla K80 GPU system,” said Alex Zhavoronkov, an author on both papers and CEO of Insilico Medicine. “The original DNN in the molecular pharmaceutics [work] was trained on a Datalytics GPU cluster in New Mexico.”

It bears repeating that Insilico Medicine was the main driver behind both papers and has a business interest in bolstering its credentials; that said, deep learning is a relatively small community where collaborations between academic, commercial, and technology suppliers are considerable. (For a snapshot of trends at the leading edge see HPCwire article, Beyond von Neumann, Neuromorphic Computing Steadily Advances.)

Insilico, founded in 2014, chose to focus on deep learning and signaling pathway activation analysis, which is an effective way to reduce dimensionality in gene expression data. “We are essentially a drug discovery engine now,” said Zhavoronkov, who has long been familiar with GPU technology, having worked for several years at ATI Technologies. An expat from Russia who has maintained close ties there, he has grown Insilico Medicine to a staff of 39, including 22 in Moscow; eleven are focused exclusively on deep learning.

Zhavoronkov divides the current deep learning community into three segments: one using off-the-shelf systems and tools; a second pushing the boundary and developing its own tools; and an elite third segment focused primarily on neural network R&D and developing new paradigms, with Google DeepMind cited as an example of the latter. “We fall into the middle category but also with domain expertise in drug discovery. There are few companies that have both.”

Perhaps predictably bullish, he said, “Both papers are first in class and demonstrate that deep learning can be very powerful in both drug discovery and biomarker development. In a short time we got over 800 strong hypotheses for both efficacy and toxicity of multiple drugs in many diseases.”

[i] Deep learning applications for predicting pharmacological properties of drugs and drug repurposing using transcriptomic data, Molecular Pharmaceutics, published by the American Chemical Society, http://pubs.acs.org/doi/abs/10.1021/acs.molpharmaceut.6b00248; the manuscript is now posted on the “Just Accepted” service of the ACS. Authors listed: Alexander Aliper, Sergey Plis, Artem Artemov, Alvaro Ulloa, Polina Mamoshina, Alex Zhavoronkov

[ii] Deep biomarkers of human aging: Application of deep neural networks to biomarker development, published in the May issue of Aging (Vol 8, No5), http://www.impactaging.com/papers/v8/n5/full/100968.html. Authors listed: Evgeny Putin, Polina Mamoshina, Alexander Aliper, Mikhail Korzinkin, Alexey Moskalev, Alexey Kolosov, Alexander Ostrovskiy, Charles Cantor, Jan Vijg, and Alex Zhavoronkov
