Drug Discovery Looks for Its Next Fix

By Michael Feldman

July 31, 2012

Despite the highly profitable nature of the pharmaceutical business and the large amount of R&D money companies throw at creating new medicines, the pace of drug development is agonizingly slow. Over the last few years, on average, fewer than two dozen new drugs have been introduced per year. One of the more promising technologies that could help speed up this process is supercomputing, which can be used not only to find better, safer drugs, but also to weed out those compounds that would eventually fail during the later stages of drug trials.

According to a 2010 report in Nature, big pharma spends something like $50 billion per year on drug research and development. (To put that in perspective, that’s four to five times the total spend for high performance computing.) The Nature report estimates the price tag to bring a drug successfully to market is about $1.8 billion, and rising. A lot of that cost is due to the high attrition rate of drugs, which is caused by problems in absorption, distribution, metabolism, excretion and toxicity that get uncovered during clinical trials.

Ideally, the drug makers would like to know which compounds were going to succeed before they got to the expensive stages of development. That’s where high performance computing can help. The approach is to use molecular docking simulations on the computer to determine whether the drug candidate can bind to the target protein associated with the disease. The general idea is to find the key (the small molecule drug) that fits in the lock (the protein).

AutoDock, probably the most widely used molecular docking application in the drug research community, played a role in developing some of the more successful HIV drugs on the market. Fortunately, it is freely available under the GNU General Public License.
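For readers unfamiliar with the tool, the sketch below shows roughly what a single AutoDock 4 docking run looks like when driven from Python. It is a minimal illustration, not a production workflow: the receptor and ligand parameter files are hypothetical placeholders that a real campaign would generate per compound with AutoDockTools.

```python
import subprocess

# Hypothetical input files prepared with AutoDockTools (names are placeholders).
receptor_gpf = "receptor.gpf"      # grid parameter file for the target protein
ligand_dpf = "ligand_0001.dpf"     # docking parameter file for one compound

# Step 1: precompute affinity grids around the binding site (autogrid4).
subprocess.run(["autogrid4", "-p", receptor_gpf, "-l", "receptor.glg"], check=True)

# Step 2: dock the ligand against those grids (autodock4); the .dlg log
# records the predicted binding poses and energies.
subprocess.run(["autodock4", "-p", ligand_dpf, "-l", "ligand_0001.dlg"], check=True)
```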

The trick is to do these docking simulations on a grand scale. Thanks to the power of modern HPC machines, millions of compounds can now be screened against a protein in a reasonable amount of time. In truth, that timeframe depends on how many cores you can put to the task. For a typical medium-sized cluster that a drug company might have in-house, it would take several weeks to screen just a few thousand compounds against one target protein. To reach a more interactive workflow, you need something approaching a petascale supercomputer.
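Because each compound docks independently, scaling out is conceptually simple. Here is a minimal sketch of fanning docking jobs across the cores of a single node, assuming a directory of pre-prepared ligand parameter files (the paths and the AutoDock invocation are assumptions, following the earlier example):

```python
import glob
import subprocess
from multiprocessing import Pool

def dock_one(dpf_path: str) -> str:
    """Dock a single pre-prepared ligand and return its log file path."""
    log_path = dpf_path.replace(".dpf", ".dlg")
    subprocess.run(["autodock4", "-p", dpf_path, "-l", log_path], check=True)
    return log_path

if __name__ == "__main__":
    ligands = sorted(glob.glob("ligands/*.dpf"))  # hypothetical input directory
    with Pool() as pool:                          # one worker per available core
        logs = pool.map(dock_one, ligands)
    print(f"Docked {len(logs)} compounds")
```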

But not necessarily an actual supercomputer. Compute clouds have turned out to be well suited to this type of embarrassingly parallel application. For example, in a recent test with 50,000 cores on Amazon’s cloud (provisioned by Cycle Computing), software was able to screen 21 million compounds against a protein target in less than three hours.

Real supercomputers work too. At Oak Ridge National Laboratory (ORNL), researchers used 50,000 cores of Jaguar to screen about 10 million drug candidates in less than a day. Jeremy C. Smith, director of the Center for Molecular Biophysics at ORNL, believes this level of virtual screening is the most cost-effective way to turbo-charge the drug pipeline. But the real utility of the supercomputing approach, says Smith, is that it can also be used to screen out drugs with toxic side effects.

Toxicity is often hard to detect until it comes time to do clinical trials, the most expensive and time-consuming phase of drug development. Worse yet, sometimes toxicity is not discovered until after the drug has been approved and released into the wild. So identifying these compounds early has the potential to save lots of money, not to mention lives. As Smith says, “If drug candidates are going to fail, you want them to fail fast, fail cheap.”

At the molecular level, toxicity is caused by a drug binding to the wrong protein, one that is actually needed by the body, rather than binding selectively to the protein causing the condition. The problem is that humans have many thousands of different proteins, so every potential compound needs to be checked against each one. When you’re working with millions of drug candidates, the job becomes overwhelming, even for the petaflop supercomputers of today. To tackle the toxicity problem, you’ll need an exascale machine, says Smith.
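A rough back-of-envelope calculation shows why the all-proteins problem outgrows today’s machines. The numbers below are illustrative assumptions, not measurements:

```python
# Illustrative assumptions (not measured values).
compounds = 10_000_000        # drug candidates in the screening library
proteins = 20_000             # roughly the size of the human proteome
core_hours_each = 0.1         # assumed cost of one docking calculation

total_core_hours = compounds * proteins * core_hours_each
print(f"{total_core_hours:,.0f} core-hours")                  # 20,000,000,000

# Even with a million cores running flat out, that works out to roughly
# 833 days (over two years) of continuous screening.
print(f"{total_core_hours / 1_000_000 / 24:,.0f} days on a million cores")
```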

Besides screening for toxicity, the same exascale setup could be used to repurpose existing drugs for other medical conditions. That is, the docking software could take approved drugs as its starting point and try to match them against various target proteins known to cause disease. Right now, repurposing opportunities are typically discovered by trial and error, but the growing number of compounds in this multiple-use category suggests this could be a rich new area of drug discovery.

In any case, sheer compute power is not the complete answer. For starters, the software has to be scaled up to the level of the hardware, and on an exascale machine that hardware is more than likely going to be based on heterogeneous processors. But since the problem is easily parallelized (each docking operation can be performed independently of the others), at least the scaling aspect should be relatively easy to overcome.
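To illustrate why scaling out is the easy part, here is a sketch of a static task split across the ranks of an MPI job, assuming a shared filesystem of prepared ligands and the same hypothetical AutoDock invocation as above. A production code would add dynamic load balancing and fault tolerance:

```python
import subprocess
from glob import glob
from mpi4py import MPI   # assumes an MPI-enabled cluster environment

def dock_one(dpf_path: str) -> None:
    """Dock a single ligand; same hypothetical AutoDock call as the earlier sketch."""
    subprocess.run(["autodock4", "-p", dpf_path,
                    "-l", dpf_path.replace(".dpf", ".dlg")], check=True)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

ligands = sorted(glob("ligands/*.dpf"))   # hypothetical shared filesystem path
for dpf in ligands[rank::size]:           # round-robin split; no communication needed
    dock_one(dpf)

comm.Barrier()                            # wait for every rank to finish its share
if rank == 0:
    print(f"Screened {len(ligands)} compounds across {size} MPI ranks")
```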

The larger problem is that the molecular modeling software itself is imperfect. Unlike a true lock and key, proteins are dynamic structures, and the action of binding to a molecule changes their shape. Therefore, physics simulation is also required to get a more precise match.

AutoDock, for example, is only able to provide a crude match between drug and protein. To get higher-fidelity docking, more compute-intensive algorithms are required. Researchers, like those at ORNL, often resort to more precise molecular dynamics codes after performing a crude screening run with AutoDock.
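In practice this becomes a funnel: a cheap, crude score for every compound, then expensive refinement for only the best hits. A schematic sketch of that two-stage workflow, with the score parser and the molecular dynamics hand-off left as hypothetical placeholders:

```python
from glob import glob

def parse_binding_energy(dlg_path: str) -> float:
    """Hypothetical parser: pull the best predicted binding energy
    (kcal/mol, lower is better) out of an AutoDock .dlg log file."""
    raise NotImplementedError  # placeholder

def refine_with_md(dlg_path: str) -> float:
    """Hypothetical hand-off to a molecular dynamics engine for a
    higher-fidelity (and far more expensive) binding estimate."""
    raise NotImplementedError  # placeholder

# Stage 1: rank every crude docking result by predicted binding energy.
crude_scores = {dlg: parse_binding_energy(dlg) for dlg in glob("ligands/*.dlg")}
top_hits = sorted(crude_scores, key=crude_scores.get)[:1000]  # keep the best 1,000

# Stage 2: spend the expensive molecular dynamics compute only on the survivors.
refined = {hit: refine_with_md(hit) for hit in top_hits}
```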

None of this guarantees that virtual docking on exascale machines will launch a golden age of drug discovery. It’s possible that researchers will find there are only a handful of small-molecule compounds that are both effective against disease and non-toxic. But Smith believes the approach is full of promise. “This is the way to design drugs since this mirrors the way nature works,” he says.
