May 22, 2014

A Heterogeneous Approach to Molecular Dynamics

Tiffany Trader

As director of the biophysics program at Stanford University, Vijay Pande understands that cloud is no replacement for supercomputers like the petascale Blue Waters machine, but the scientist is having success using loosely-coupled cloudy cores for molecular dynamics research.

Pande has been using the Blue Waters system at the National Center for Supercomputing Applications (NCSA) to study protein folding errors and determine which of them are correlated with diseases like Alzheimer's, Parkinson's, mad cow disease and many cancers. By determining which errors lead to disease and what kinds of drugs can target those folding errors, there is great potential to treat or cure these classes of debilitating and deadly diseases.

Large-scale molecular dynamics (MD) simulations have traditionally been run on tightly-coupled supercomputers with fast interconnects, considered necessary because the slowest link in the process is transferring data between cores. Pande, however, has pioneered an alternate method that leverages the efficient parallelization of distributed computing while avoiding the communication bottleneck. He determined that many shorter, independent simulations could run in parallel on heterogeneous hardware, such as cloud resources. As a bonus, the infrastructure also handles hardware failures gracefully: when a single simulation terminates, the rest continue. It's a complementary approach to traditional MD simulation, completing many short runs in the same wall-clock window a single long run would occupy.
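The embarrassingly parallel pattern described above can be sketched in a few lines; the `simulate` function here is a hypothetical toy stand-in (a 1-D random walk) for a real MD work unit, and the 10% failure rate is an illustrative assumption, not a figure from the project.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate(seed, steps=1000):
    """Toy stand-in for one short, independent MD run: a 1-D
    random walk whose endpoint plays the role of a final
    conformation. Occasionally 'fails' like flaky hardware."""
    rng = random.Random(seed)
    if rng.random() < 0.1:  # simulated hardware failure (assumed rate)
        raise RuntimeError(f"work unit {seed} lost")
    position = 0.0
    for _ in range(steps):
        position += rng.gauss(0.0, 1.0)
    return seed, position

def run_generation(n_units):
    """Launch n_units independent runs; failed units are simply
    dropped, and the surviving runs still contribute results."""
    results = []
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(simulate, seed) for seed in range(n_units)]
        for f in futures:
            try:
                results.append(f.result())
            except RuntimeError:
                pass  # one lost unit does not stop the generation
    return results

if __name__ == "__main__":
    done = run_generation(20)
    print(f"{len(done)} of 20 work units returned results")
```

Because no run communicates with any other, losing a worker costs only that one trajectory, which is exactly why the approach tolerates volunteer and cloud hardware.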

For the initial stages of the work, the researchers in Pande's group utilize Folding@home and Google Exacycle to execute detailed MD simulations of protein folding. Folding@home is the long-running volunteer computing project, powered by the spare cycles of its contributors, most of whom are using desktops or laptops. Each computer runs a set of independent MD simulations and returns its results to Folding@home. Google Exacycle follows essentially the same architecture, except that the cycles come from Google's own infrastructure. This kind of grid or distributed computing, considered a flavor of cloud computing by some, is well suited to workloads with low I/O and communication requirements.

For the next step, researchers take the results of the first-generation runs and pass them to Blue Waters. At this point, a tool called MSMBuilder identifies molecules that are similar in structure and clusters them into microstates. It then determines which molecules have reached a long-lived, or metastable, state. The microstates that pass this screening provide the starting points for the second round of model runs. This subset of molecules is passed back to Folding@home for a second generation of runs, a process that may iterate several times during a single experiment.
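The cluster-then-screen step above can be illustrated with a minimal sketch. The function names, the toy 1-D "conformations," and the binning-by-coordinate clustering are all illustrative assumptions; MSMBuilder's real pipeline clusters full molecular structures (e.g., by RMSD) and estimates metastability from kinetics, not from raw population counts.

```python
from collections import defaultdict

def cluster_microstates(conformations, bin_width=1.0):
    """Group structurally similar conformations (here, toy 1-D
    coordinates) into microstates by simple binning -- a crude
    stand-in for structural clustering of real molecules."""
    states = defaultdict(list)
    for x in conformations:
        states[round(x / bin_width)].append(x)
    return dict(states)

def metastable_states(states, min_population=3):
    """Keep only well-populated microstates as a rough proxy for
    long-lived (metastable) states; their members become the
    starting points for the next generation of runs."""
    return {k: v for k, v in states.items() if len(v) >= min_population}

# One iteration of the loop: cluster a generation's endpoints,
# screen for metastable states, seed the next round from them.
endpoints = [0.1, 0.2, 0.15, 5.1, 5.0, 4.9, 9.7, 0.05]
micro = cluster_microstates(endpoints)
seeds_for_next_gen = metastable_states(micro)
```

Here the endpoints near 0 and near 5 each form a populated microstate and survive the screening, while the lone outlier near 9.7 is discarded, mirroring how only metastable structures seed the next generation.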

Pande’s work is yet another example of the trend towards heterogeneity in HPC, where a workflow, or in this case, a part of a workflow, is matched up with the most appropriate resource.

But Pande does not see his work as competing with traditional approaches. In fact, the Stanford researcher thinks a combination of both approaches will be necessary to exploit next-generation computing resources as they head toward exascale.

