August 21, 2012
A group of researchers at the University of California, San Diego (UCSD) has established a new approach to simulating molecular behavior. By running an enhanced sampling algorithm on a GPU-equipped desktop, the team was able to achieve millisecond-scale protein simulations. Prior to this, similar research required the use of Anton, a multi-million dollar, purpose-built supercomputer specifically designed for molecular modeling. HPCwire spoke with project members Ross Walker and Romelia Salomon-Ferrer about their research.
A primary challenge in the study of protein dynamics is the ability to simulate interactions over relatively long time periods. “The problem we’ve always had is that the biological timescale is really at the high-microsecond/low-millisecond time scale,” said Walker. “That’s where most of the interesting large-scale motions in proteins are occurring.”
He went on to explain that conventional CPU clusters could handle roughly 50 nanoseconds of simulation per day. Hybrid systems (those accelerated by GPUs) perform somewhat better, achieving around 75 to 100 nanoseconds per day. But even at that rate, a single microsecond takes ten days of computation, and a millisecond would take decades.
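The gap Walker describes can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses the throughput figures quoted in the article; the rates are round illustrative numbers, since real performance depends on system size, force field, and hardware.

```python
# Wall-clock time needed to reach biologically relevant timescales
# at the simulation rates quoted in the article (ns of simulated
# time per day of computation).

TARGET_US_NS = 1_000        # 1 microsecond = 1,000 ns
TARGET_MS_NS = 1_000_000    # 1 millisecond = 1,000,000 ns

rates_ns_per_day = {
    "CPU cluster": 50.0,
    "GPU-accelerated node": 100.0,
    "Anton (quoted)": 25_000.0,   # 25 microseconds/day
}

for name, rate in rates_ns_per_day.items():
    days_to_us = TARGET_US_NS / rate
    days_to_ms = TARGET_MS_NS / rate
    print(f"{name}: {days_to_us:.1f} days per microsecond, "
          f"{days_to_ms:,.0f} days per millisecond")
```

At 100 ns/day, a millisecond works out to 10,000 days, or about 27 years, which is why a brute-force approach on commodity hardware was never on the table.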
Eventually the simulations hit a scaling wall, limiting how far in time they can model interactions. According to Walker, the primary bottleneck is interconnect technology: more GPUs could be added to each node, but that would only help if interconnect bandwidth were doubled and latency cut in half to match.
This dilemma prompted D.E. Shaw Research, a company founded by hedge fund billionaire David Shaw to advance drug discovery through molecular dynamics, to create the Anton supercomputer. The system pairs specialized ASICs with a custom torus interconnect. With this purpose-built architecture, Anton can outperform traditional supercomputers by two to three orders of magnitude, simulating up to 25 microseconds per day.
While Shaw’s design has obvious benefits in speed and accuracy, its proprietary approach makes gaining access to an Anton machine rather difficult. For academic researchers, there is but a single machine in production, at the Pittsburgh Supercomputing Center (PSC).
So the team at UCSD considered changing the algorithms, enabling them to be run on basic commodity hardware. “Do we really have to stick with the equations we’ve been using for the past 30 years?” asked Walker. “Could we try and act smarter with these equations and tailor them for specific things we want to look at?”
They developed a technique called accelerated molecular dynamics (aMD), which optimizes the sampling of a protein molecule's conformational space. The technique was developed in collaboration with the Howard Hughes Medical Institute (HHMI) and UCSD professor Andrew McCammon, a co-author of the research. According to an official statement, the group ran an aMD simulation on a desktop equipped with just a pair of NVIDIA GTX 580 GPUs.
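The standard aMD formulation (due to Hamelberg, Mongan, and McCammon) adds a smooth "boost" energy whenever the potential drops below a chosen threshold, raising the floor of energy basins so the system escapes them, and explores new conformations, far faster. A minimal sketch of that boost term follows; the threshold `E` and smoothing parameter `alpha` here are illustrative values, not parameters from the UCSD study.

```python
import numpy as np

def amd_boost(V, E, alpha):
    """aMD boost energy: dV = (E - V)^2 / (alpha + E - V) when V < E,
    and zero otherwise. Larger boosts apply in deeper basins; the
    dynamics then run on the flattened potential V + dV."""
    V = np.asarray(V, dtype=float)
    return np.where(V < E, (E - V) ** 2 / (alpha + (E - V)), 0.0)

# Deep basins (V far below E) receive a large boost;
# energies above the threshold E receive none.
V = np.array([-120.0, -100.0, -80.0, -40.0])
print(amd_boost(V, E=-50.0, alpha=20.0))
```

Because the dynamics run on a modified potential, observables are typically recovered afterward by reweighting each frame by its boost energy, which is what lets the shorter aMD trajectory stand in for a much longer conventional one.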
The researchers analyzed the bovine pancreatic trypsin inhibitor (BPTI), a relatively small molecule as proteins go. It took around 10 days of computation to capture 500 nanoseconds of protein folding, which is 2,000 times shorter than the millisecond-scale simulation performed by Anton. However, the aMD run accurately reproduced all of the structural states returned by the much longer supercomputer simulation. While the UCSD team used Fermi-based GPUs for their run, Walker and Salomon-Ferrer note that a Kepler-generation card, such as the K10, would improve processing time by about 30 percent.
The most obvious advantage to this approach is its ability to perform accurate protein simulations on thousand-dollar desktop systems. That opens up this type of research to thousands of scientists, rather than just those select few with custom-built supercomputers at their disposal.