Drug Developers Use Google Cloud HPC in the Fight Against ALS

By Doug Black

February 16, 2017

Within the haystack of a lethal disease such as ALS (amyotrophic lateral sclerosis / Lou Gehrig’s Disease) there exists, somewhere, the needle that will pierce this therapy-resistant affliction. Finding the needle is a trial-and-error process of monumental proportions for scientists at pharmaceutical companies, medical research centers and academic institutions. As models grow in scale so too does the need for HPC resources to run simulations iteratively, to try-and-fail fast until success is found.

That’s all well and good if there’s ready access to HPC on premises. If not, drug developers such as ALS researcher Dr. May Khanna, an assistant professor in the Department of Pharmacology at the University of Arizona, have turned to HPC resources provided by public cloud services. But using AWS, Azure or Google introduces a host of daunting compute management problems that tax the skills and time of most on-site IT staffs.

These tasks include data placement, instance provisioning, job scheduling, software and network configuration, cluster startup and tear-down, cloud provider setup, cost management and instance health checking. To handle these cloud orchestration functions across 5,000 cores of Google Cloud Preemptible VMs (PVMs), Dr. Khanna and her team at Arizona turned to Cycle Computing, running “molecular docking” simulations at scale with Schrödinger’s Glide molecular modeling drug design software.

The results: simulations that would otherwise take months have been compressed to a few hours, short enough to be run during one of Dr. Khanna’s seminars and the output shared with students.


Developing new drugs to target a specific disease often starts with the building blocks of the compounds that become the drugs. The process begins with finding small molecules that can target specific proteins whose interaction is thought to be the disease’s starting point. The goal is to find a molecule that breaks the proteins apart, and the way to find it is to simulate how the small molecules dock to specific locations on the protein. These simulations are computationally intensive, and many molecules need to be simulated to find a few good candidates.

Without powerful compute resources, researchers must artificially constrain their searches, limiting the number of molecules they simulate and checking only an area of the protein known to be biologically active. Even with these constraints, the simulations take a long time. Done right, molecular docking is an iterative process of simulation, biological verification and further refinement, so shortening the iteration time is important to advancing the research.

The objective of Dr. Khanna’s work was to simulate the docking of 1 million compounds to one target protein. After a simulation was complete, the protein was produced in the lab, and compounds were then tested with nuclear magnetic resonance spectroscopy.

“It’s a target (protein) that’s been implicated in ALS,” the energetic Dr. Khanna told EnterpriseTech (HPCwire’s sister pub). “The idea is that the particular protein was very interesting, people who modulated it in different ways found some significant improvement in the ALS models they have with (lab) mice. The closer we can link biology to what we’re seeing as a target, the better chance of actually getting to a real therapeutic.”

“Modulating,” Dr. Khanna explained, is disrupting two proteins interacting in a way that is associated with ALS, a disease that currently afflicts about 20,000 Americans and for which there is no cure. “We’re trying to disrupt them, to release them to do their normal jobs,” she said.

She said CycleCloud plays a central role in running the Schrödinger Glide simulations. Without Google Cloud PVMs, the simulations would take too long and the model sizes would be too small to generate meaningful results. Without CycleCloud, the management of 5,000 PVM cores would not be possible.

CycleCloud provides a web-based GUI, a command line interface and APIs for defining cloud-based clusters. It auto-scales clusters by instance type, maximum cluster size and cost parameters, deploying systems of up to 156,000 cores while validating each piece of the infrastructure. Additionally, it syncs in-house data repositories with cloud locations in a policy- or job-driven fashion to lower costs.
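The article does not show what such a cluster definition looks like, but a minimal sketch of the kind of policy it describes – scale by instance type, maximum cluster size and a cost ceiling – might look like the following. All names and numbers here are hypothetical illustrations, not CycleCloud’s actual API.

```python
# Hypothetical sketch of an auto-scaling policy driven by queue depth,
# instance type, a core cap and an hourly budget. Not CycleCloud's API.
from dataclasses import dataclass

@dataclass
class ScalePolicy:
    cores_per_instance: int = 16        # e.g. a 16-vCPU instance type
    max_cluster_cores: int = 5000       # cluster-size cap from the definition
    price_per_core_hour: float = 0.01   # assumed preemptible rate, USD
    budget_per_hour: float = 50.0       # assumed hourly cost ceiling, USD

    def target_instances(self, queued_jobs: int, cores_per_job: int = 1) -> int:
        """How many instances the cluster should scale to for the current queue."""
        wanted_cores = queued_jobs * cores_per_job
        budget_cores = int(self.budget_per_hour / self.price_per_core_hour)
        cores = min(wanted_cores, self.max_cluster_cores, budget_cores)
        return cores // self.cores_per_instance

policy = ScalePolicy()
print(policy.target_instances(queued_jobs=300, cores_per_job=16))  # -> 300 instances
```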

It should be noted that the use of Google Cloud’s PVMs, while helping to hold the cost of running the simulations down to $200, contributes an additional degree of complexity to Dr. Khanna’s project work. Preemptible compute capacity offers the advantage of a consistent price not subject to the dynamic demand pricing that applies to other public cloud instances. PVMs are assigned to a job for a finite period of time but – here’s the rub – they can be revoked at any moment. Dr. Khanna’s workflow, consisting of small, short-running jobs, was well suited to PVMs, but the instances can still disappear without warning.
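The $200 figure itself can be sanity-checked against the core-hour totals reported later in the article; the arithmetic below is an illustration, not a number from the piece.

```python
# Rough consistency check using the article's figures: ~$200 spread over
# the ~21,500 core-hours consumed by the two simulation stages works out
# to roughly a cent per core-hour, in the ballpark of preemptible pricing.
total_cost_usd = 200
core_hours = 1_500 + 20_000                   # LigPrep stage + Glide docking stage
print(round(total_cost_usd / core_hours, 4))  # ~0.0093 USD per core-hour
```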

In the case of Dr. Khanna’s ALS research work, said Jason Stowe, CEO of Cycle Computing, “if you’re willing to [risk] getting rid of the node, but you’re able to use it during that timeframe at substantially lower cost, that allows you [to] get a lot more computing bang for your buck. CycleCloud automates the process, taking care of nodes that go away, making sure the environment isn’t corrupted, and other technical aspects that we take care of so the user doesn’t have to.”
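How CycleCloud does this internally is not described; the sketch below only illustrates the kind of bookkeeping a preemption-tolerant workflow requires, with invented job and node names.

```python
# Hypothetical sketch of preemption handling: jobs running on a revoked
# node go back to the front of the queue. Not CycleCloud's implementation.
from collections import deque

pending = deque(f"dock_chunk_{i:03d}" for i in range(300))  # 300 docking sub-jobs
running = {}    # job name -> node name
finished = []

def on_node_preempted(node: str) -> None:
    """Requeue every job that was running on a node Google revoked."""
    for job, assigned in list(running.items()):
        if assigned == node:
            del running[job]
            pending.appendleft(job)   # retry before starting new work

def on_job_finished(job: str) -> None:
    """Record completion and free the scheduler slot."""
    running.pop(job, None)
    finished.append(job)
```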

The simulation process is divided into two parts. The first step uses the Schrödinger LigPrep package, which converts 2D structures to the 3D format used in the next stage. This stage started with 4 GB of input data staged to an NFS filer; the output, approximately 800 KB, was stored on the NFS filer as well. To run the simulation as efficiently as possible, the workload was split into 300 smaller jobs to assist in scaling the next stage of the workflow. In total, the first stage consumed 1,500 core-hours of computation.
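The article does not say how the library was divided; a minimal sketch of that fan-out, assuming one compound record per line in a plain-text file (Schrödinger’s real structure formats are more involved), could look like this:

```python
# Hypothetical sketch: split a prepared compound library into 300 chunks so
# the docking stage can fan out across many nodes. Assumes one record per
# line, which is a simplification of Schrödinger's actual file formats.
from pathlib import Path

def split_library(library: Path, out_dir: Path, n_chunks: int = 300) -> list[Path]:
    compounds = library.read_text().splitlines()
    out_dir.mkdir(parents=True, exist_ok=True)
    size = -(-len(compounds) // n_chunks)       # ceiling division
    chunks = []
    for i in range(n_chunks):
        part = compounds[i * size:(i + 1) * size]
        if not part:
            break
        path = out_dir / f"chunk_{i:03d}.txt"
        path.write_text("\n".join(part) + "\n")
        chunks.append(path)
    return chunks
```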

The Schrödinger Glide software package performs the second stage of the process, where the actual docking simulation is performed. Each of the 300 sub-jobs consists of four stages, each with an attendant prep stage. The total consumption was approximately 20,000 core-hours using 5,000 cores of n1-highcpu-16 instances. Each instance had 16 virtual cores with 60 gigabytes of RAM. The CycleCloud software dynamically sized the cluster based on the number of jobs in queue and replaced preempted instances.
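Those figures imply a wall-clock time of roughly four hours for the docking stage, consistent with the “few hours” quoted earlier; a quick back-of-envelope check, assuming perfect parallelism:

```python
# Back-of-envelope check of the docking stage using the article's numbers,
# assuming perfect parallelism across the cluster.
core_hours = 20_000
cores = 5_000
vcpus_per_instance = 16

instances = cores // vcpus_per_instance   # ~312 16-vCPU instances
wall_clock_hours = core_hours / cores     # 4.0 hours of wall-clock time
print(instances, wall_clock_hours)        # 312 4.0
```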

Dr. Khanna’s research is in the early stages of a process that, if successful, could take several years to reach human clinical trials.

“The faster we can do this, the less time we have to wait for results, so we can go back and test it again and try to figure out what compounds are really binding,” she said, “the faster the process can move along.”

Dr. Khanna said plans are in place to increase the size of the pool of potential compounds, as well as to include other proteins in the simulation to look for interactions that would not typically be seen until later in the process. The team will also simulate over the entire surface of the protein instead of just a known-active area, unlocking “an amazing amount of power” in the search process, she said.

“That jump between docking to binding to biological testing takes a really long time, but I think we can move forward on that with this cloud computing capacity,” she said. “The mice data that we saw was really exciting…, you could see true significant changes with the mice. I can’t tell you we’ve discovered the greatest thing for ALS, but showing that if we take these small molecules and we can see improvement, even that is so significant.”
