November 10, 2009
Nov. 10 -- A University of Delaware research team has been awarded a $451,051 grant from the National Science Foundation's Major Research Instrumentation program to acquire a high-performance computing cluster for use in the rapidly expanding field of computational chemistry.
Computational approaches to understanding complex molecular scale systems are vital to scientists conducting leading-edge research in chemistry and related disciplines, says Doug Doren, professor of chemistry and biochemistry, associate dean for research in the College of Arts and Sciences, and principal investigator for the project.
Doren is working on the project with co-principal investigators Dion Vlachos, Elizabeth Inez Kelley Professor of Chemical Engineering; Michela Taufer, assistant professor of computer and information sciences; and Sandeep Patel, assistant professor of chemistry and biochemistry.
Doren said the NSF funding will be used to purchase a high-performance computing cluster with a novel architecture that combines traditional central processing units (CPUs) with graphics processing units (GPUs).
"The GPUs are the same devices used to calculate how screen images should look on a personal computer or video game," he explained. "While GPUs offer very fast computing speeds at lower prices than traditional CPUs, they are not designed for scientific applications. One of our goals is to learn how to make efficient use of this new architecture to address molecular-scale problems in science and engineering."
About 20 UD faculty members contributed their ideas to the proposal, Doren said, and it is anticipated "they will make heavy use of this new facility."
The faculty "all do research that seeks to describe nature at the molecular scale, but their work cuts across a wide array of disciplines," Doren said. "While many of the participants are chemists and chemical engineers, we also have materials scientists, environmental engineers, physicists, mathematicians and computer scientists involved. We also hope to draw in scientists working on similar problems who have little prior experience with high-performance computing, and others on campus who have special expertise in GPU computing."
The high-performance cluster is important to UD researchers because, Doren said, "Computing has become an essential tool in chemistry and other molecular sciences. A precise mathematical description of matter at the molecular scale, based on quantum mechanics, is well developed but the equations are difficult to solve. Modern computing power, along with some carefully developed approximations, makes it possible to simulate rather complex molecular systems."
In many cases, he added, "computational chemistry methods have already advanced to the point where they can provide reliable predictions of molecular properties that can be used to interpret experiments, guide the design of new experiments, or even determine data that is too difficult or costly to obtain from experiments. A central theme of our proposal is to extend the applications of these methods by making use of GPU co-processors and by developing new approximation methods that can connect the molecular-scale description to observations in the macroscopic world where we all live."
Doren said the new computing facility will provide a dramatic increase in computing power for the molecular sciences at UD. "This large, shared facility will make new computing capacity available to these researchers, while allowing them to explore the use of GPUs. It will also make state-of-the-art computing available to many new users who do not own computers powerful enough for their applications," he said.
UD faculty "are very pleased to have this new resource available for our research and we are grateful to the NSF for making this possible," Doren said. "The NSF program that funded our grant got a large budget increase from the Recovery Act passed last February, because it was recognized that the scientific community could quickly identify valuable investments in research infrastructure that would have long-term benefits."
Doren said he believes one factor that made the UD application compelling was work done over the last year by the three co-principal investigators to demonstrate the effectiveness of GPUs in molecular simulations. He said Vlachos, Taufer and Patel have developed new simulation software to take advantage of GPUs.
"They find that a widely used molecular dynamics method runs as fast on one integrated CPU/GPU unit as it would on 21 CPUs," he said, adding, "Another method, known as Monte Carlo simulation because of its reliance on random sampling from probability distributions, runs over 100 times faster when GPUs are used. Since a GPU costs much less than a CPU, these increases in performance are a real bargain. The new software developed at UD to take advantage of GPUs is likely to have a broad impact on the work of other computational scientists as this type of hardware becomes more widely adopted."
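The UD team's simulation software is not described in detail here, but the Monte Carlo idea Doren refers to can be illustrated with a minimal sketch: estimating pi by sampling random points from a uniform distribution and counting how many fall inside a quarter circle. (This toy example is the author's illustration, not the UD code; it also shows why the method maps well to GPUs, since every sample is independent and can be evaluated in parallel.)

```python
# Minimal Monte Carlo sketch: estimate pi by uniform random sampling.
# Each sample is independent, which is what makes this class of method
# well suited to the massively parallel architecture of a GPU.
import random

def estimate_pi(n_samples, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point lies inside the quarter circle
            inside += 1
    # Area ratio (quarter circle / unit square) is pi/4
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    print(estimate_pi(100_000))
```

The estimate converges slowly (error shrinks as the square root of the sample count), so production Monte Carlo codes run enormous numbers of samples, which is exactly where a GPU's ability to evaluate thousands of samples concurrently pays off.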
Source: University of Delaware