Drug Discovery Looks for Its Next Fix

By Michael Feldman

July 31, 2012

Despite the highly profitable nature of the pharmaceutical business and the large amount of R&D money companies throw at creating new medicines, the pace of drug development is agonizingly slow. Over the last few years, on average, fewer than two dozen new drugs have been introduced per year. One of the more promising technologies that could help speed up this process is supercomputing, which can be used not only to find better, safer drugs, but also to weed out those compounds that would eventually fail during the latter stages of drug trials.

According to a 2010 report in Nature, big pharma spends something like $50 billion per year on drug research and development. (To put that in perspective, that’s four to five times the total spend for high performance computing.) The Nature report estimates the price tag to bring a drug successfully to market is about $1.8 billion, and rising. A lot of that cost is due to the high attrition rate of drugs, caused by problems in absorption, distribution, metabolism, excretion and toxicity that get uncovered during clinical trials.

Ideally, the drug makers would like to know which compounds were going to succeed before they got to the expensive stages of development. That’s where high performance computing can help. The approach is to use molecular docking simulations on the computer to determine whether a drug candidate can bind to the target protein associated with the disease. The general idea is to find the key (the small molecule drug) that fits in the lock (the protein).

AutoDock, probably the most popular molecular docking application in the drug research community, played a role in developing some of the more successful HIV drugs on the market. Fortunately, the software is freely available under the GNU General Public License.
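For a sense of the mechanics, here is a minimal sketch of a single docking run, assuming AutoDock 4’s command-line tools (autogrid4 and autodock4) are installed and the receptor and ligand input files have already been prepared with AutoDockTools; the file names are hypothetical.

```python
import subprocess

# Hypothetical input files; in practice these are prepared with
# AutoDockTools from the protein and ligand structures.
RECEPTOR_GPF = "receptor.gpf"  # grid parameter file for the target protein
LIGAND_DPF = "ligand.dpf"      # docking parameter file for one compound

# Step 1: precompute affinity grids around the protein's binding site.
subprocess.run(["autogrid4", "-p", RECEPTOR_GPF, "-l", "receptor.glg"],
               check=True)

# Step 2: dock the ligand against those grids; candidate poses and their
# estimated binding energies land in the .dlg log file.
subprocess.run(["autodock4", "-p", LIGAND_DPF, "-l", "ligand.dlg"],
               check=True)
```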

The trick is to do these docking simulations on a grand scale. Thanks to the power of modern HPC machines, millions of compounds can now be screened against a protein in a reasonable amount of time. In truth, that timeframe depends on how many cores you can put to the task. For a typical medium-sized cluster that a drug company might have in-house, it would take several weeks to screen just a few thousand compounds against one target protein. To reach a more interactive workflow, you need something approaching a petascale supercomputer.
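Because each compound docks independently of the rest, the screen farms out naturally across however many cores are available. Below is a minimal sketch of that task-farming pattern on a single multicore node, assuming one prepared .dpf parameter file per compound; the file-naming scheme is hypothetical.

```python
from multiprocessing import Pool
import subprocess

def dock_one(dpf):
    """Dock a single compound; each run is independent of all the others."""
    log = dpf.replace(".dpf", ".dlg")
    subprocess.run(["autodock4", "-p", dpf, "-l", log], check=True)
    return log

if __name__ == "__main__":
    # Hypothetical library: one prepared .dpf per candidate compound.
    library = [f"compound_{i:07d}.dpf" for i in range(1_000_000)]
    with Pool() as pool:  # defaults to one worker per available core
        for log in pool.imap_unordered(dock_one, library, chunksize=64):
            pass  # parse and rank the .dlg logs here
```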

But not necessarily an actual supercomputer. Compute clouds have turned out to be very well suited to this type of embarrassingly parallel application. For example, in a recent test with 50,000 cores on Amazon’s cloud (provisioned by Cycle Computing), the software was able to screen 21 million compounds against a protein target in less than three hours.
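A quick back-of-the-envelope check of that throughput, assuming perfect load balancing across the cores:

```python
compounds = 21_000_000
cores = 50_000
hours = 3

per_core = compounds / cores         # 420 compounds per core
secs_each = hours * 3600 / per_core  # ~26 seconds per docking run
print(f"{per_core:.0f} compounds/core, ~{secs_each:.0f} s per docking")
```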

Real supercomputers work too. At Oak Ridge National Lab (ORNL), researchers used 50,000 cores of Jaguar to screen about 10 million drug candidates in less than a day. Jeremy C. Smith, director of the Center for Molecular Biophysics at ORNL, believes this type of virtual screening is the most cost-effective approach to turbo-charge the drug pipeline. But the real utility of the supercomputing approach, says Smith, is that it can also be used to screen out drugs with toxic side effects.

Toxicity is often hard to detect until it comes time to do clinical trials, the most expensive and time-consuming phase of drug development. Worse yet, sometimes toxicity is not discovered until after the drug has been approved and released into the wild. So identifying these compounds early has the potential to save lots of money, not to mention lives. As Smith says, “If drug candidates are going to fail, you want them to fail fast, fail cheap.”

At the molecular level, toxicity is caused by a drug binding to the wrong protein, one that is actually needed by the body, rather than just selectively binding to the protein causing the condition. The problem is that the human body makes thousands of distinct proteins, so every potential compound needs to be checked against each one. When you’re working with millions of drug candidates, the job becomes overwhelming, even for the petaflop supercomputers of today. To tackle the toxicity problem, you’ll need an exascale machine, says Smith.
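To see why the job outgrows petascale, consider the combinatorics. The numbers below are illustrative only, reusing the per-docking time estimated above:

```python
compounds = 10_000_000  # ORNL-scale candidate library
proteins = 1_000        # order-of-magnitude count of off-target proteins
secs_per_dock = 26      # per-core docking time from the estimate above

total_dockings = compounds * proteins               # 10 billion runs
core_hours = total_dockings * secs_per_dock / 3600  # ~72 million core-hours
print(f"{total_dockings:,} dockings, ~{core_hours:,.0f} core-hours")
```

On a 50,000-core machine, that works out to roughly two months of wall-clock time for a single pass, before any of the higher-fidelity follow-up work begins.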

Besides screening for toxicity, the same exascale setup could be used to repurpose existing drugs for other medical conditions. That is, the docking software could use approved drugs as the starting point and try to match them against various target proteins known to cause disease. Right now, drug repurposing is typically discovered on a trial-and-error basis, but the increasing number of compounds in this multiple-use category suggests this could be a rich new area of drug discovery.

In any case, sheer compute power is not the complete answer. For starters, the software has to be scaled up to the level of the hardware, and on an exascale machine, that hardware is more than likely going to be based on heterogeneous processors. But since the problem is easily parallelized (each docking operation can be performed independently of the others), at least the scaling aspect should be relatively easy to overcome.

The larger problem is that the molecular modeling software itself is imperfect. Unlike a true lock and key, proteins are dynamic structures, and the action of binding to a molecule changes their shape. Therefore, physics simulation is also required to get a more precise match.

AutoDock, for example, is only able to provide a crude match between drug and protein. To get higher fidelity docking, more compute-intensive algorithms are required. Researchers, like those at ORNL, often resort to more precise molecular dynamics codes after performing a crude screening run with AutoDock.
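In practice this becomes a two-stage funnel: cheap docking scores cull the library, and only the best-scoring hits graduate to molecular dynamics. A minimal sketch, assuming the docking scores have already been parsed from the AutoDock logs; run_md_refinement is a hypothetical placeholder for launching an MD code:

```python
def screening_funnel(docking_scores, top_n=1000):
    """Two-stage funnel: cheap docking for everything, costly MD for the best.

    docking_scores maps compound id -> estimated binding energy in kcal/mol
    (more negative means a tighter predicted fit).
    """
    # Stage 1: rank every compound by its crude AutoDock score.
    ranked = sorted(docking_scores, key=docking_scores.get)
    # Stage 2: hand only the top fraction to molecular dynamics refinement.
    return ranked[:top_n]

# Hypothetical usage:
# hits = screening_funnel(scores_parsed_from_dlg_logs, top_n=500)
# for compound in hits:
#     run_md_refinement(compound)  # placeholder for launching an MD code
```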

None of this is a guarantee that virtual docking on exascale machines is going to launch a golden age of drugs. It’s possible that researchers will discover that only a handful of small molecule compounds are both effective against disease and non-toxic. But Smith believes the approach is full of promise. “This is the way to design drugs since this mirrors the way nature works,” he says.
