November 19, 2009
Second ORNL-led team also finalist for Gordon Bell Prize
Nov. 19 -- A team led by Oak Ridge National Laboratory's (ORNL's) Markus Eisenbach was named winner Thursday of the 2009 ACM Gordon Bell Prize, which honors the world's highest-performing scientific computing applications. Another team led by ORNL's Edo Aprà was also among nine finalists for the prize.
Results of the contest were announced in Portland, Ore., during the SC09 international supercomputing conference. The prize is supported by high-performance computing pioneer Gordon Bell and is administered by the Association for Computing Machinery.
Eisenbach and colleagues from ORNL, Florida State University, the Institute for Theoretical Physics, and the Swiss National Supercomputing Center achieved 1.84 thousand trillion calculations per second -- or 1.84 petaflops -- using an application that analyzes magnetic systems and, in particular, the effect of temperature on those systems. By accurately revealing the magnetic properties of specific materials -- even materials that have not yet been produced -- the project promises to boost the search for stronger, more stable magnets, thereby contributing to advances in such areas as magnetic storage and the development of lighter, stronger motors for electric vehicles.
The application -- known as WL-LSMS -- achieved this performance on ORNL's Cray XT5 Jaguar system, using more than 223,000 of Jaguar's 224,000-plus available processing cores. Earlier in the week Jaguar, whose recent upgrade from four-core to six-core processors boosted its peak performance to 2.33 petaflops, was named number one on the TOP500 list of the world's fastest computers; at 1.84 petaflops, WL-LSMS reached nearly 80 percent of that peak.
WL-LSMS allows researchers to directly and accurately calculate the temperature above which a material loses its magnetism -- known as the Curie temperature. The team's approach differs from earlier efforts because it sets aside empirical models and their attendant approximations to tackle the system through first-principles calculations.
"What we can do is calculate the Curie temperature for materials with high accuracy without external parameters," Eisenbach explained. "These first-principles calculations are orders of magnitude more computationally demanding than previous models; it's only with a petascale system such as Jaguar that calculations like this become feasible."
WL-LSMS combines two methods to achieve its goal. The first -- known as locally self-consistent multiple scattering, or LSMS -- applies density functional theory to solve the Dirac equation, a relativistic wave equation for electron behavior. The code has a robust history: it was the first to run at a sustained trillion calculations per second, a feat that earned its developers the prestigious 1998 Gordon Bell Prize. This approach, though, describes a system in its ground state at a temperature of absolute zero, or nearly -460°F. By incorporating a Monte Carlo method known as Wang-Landau, which guides the LSMS application, Eisenbach and his colleagues are able to explore technologically relevant temperature ranges.
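The Wang-Landau idea -- a random walk that builds up an estimate of the density of states until every energy is visited roughly equally, which then yields thermodynamic quantities at any temperature -- can be illustrated at toy scale. The sketch below applies it to a tiny 2D Ising magnet; the lattice size, flatness criterion, and function name are all illustrative assumptions and have no connection to the actual WL-LSMS code, which replaces the Ising energy with first-principles LSMS energies:

```python
import math
import random

def wang_landau_ising(L=4, flatness=0.8, f_final=1e-4, seed=0):
    """Estimate ln g(E), the log density of states, for an L x L
    Ising model with periodic boundaries via Wang-Landau sampling."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def neighbor_sum(i, j):
        return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

    # Total energy, counting each bond once (right and down neighbors).
    E = 0
    for i in range(L):
        for j in range(L):
            E -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])

    ln_g = {}    # running estimate of ln g(E)
    hist = {}    # visit histogram for the current refinement stage
    ln_f = 1.0   # modification factor, halved whenever hist is flat

    while ln_f > f_final:
        for _ in range(10000):
            i, j = rng.randrange(L), rng.randrange(L)
            # Flipping spin (i, j) changes the energy by 2 * s * sum(neighbors).
            dE = 2 * spins[i][j] * neighbor_sum(i, j)
            E_new = E + dE
            # Accept with probability min(1, g(E) / g(E_new)):
            # rarely visited energies are favored, flattening the histogram.
            if rng.random() < math.exp(min(0.0, ln_g.get(E, 0.0)
                                                - ln_g.get(E_new, 0.0))):
                spins[i][j] *= -1
                E = E_new
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        # "Flat" here means every bin exceeds 80% of the mean count.
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            ln_f /= 2.0
            hist = {}
    return ln_g
```

Once ln g(E) is known, averages at any temperature follow by reweighting with exp(-E/kT), which is how a single Wang-Landau run covers a whole temperature range -- the property that lets WL-LSMS locate a Curie temperature without rerunning at each temperature.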
The work improves on previous advances in magnetic materials, Eisenbach said. He noted that materials research has led in the past century to more than a 50-fold increase in the magnetic strength of materials per volume and in the last decade to more than a 100-fold increase in the density of magnetic data storage. Other efforts that may benefit from the research include the design of lighter, more resilient steel and the development of future refrigerators that use magnetic cooling.
Aprà's team -- the other finalist led by an ORNL researcher -- achieved 1.39 petaflops on Jaguar in a first-principles, quantum mechanical exploration of the energy contained in clusters of water molecules. The team, comprising members from ORNL, Australian National University, Pacific Northwest National Laboratory (PNNL), and Cray Inc., used a computational chemistry application known as NWChem, which was developed at PNNL.
The application used 223,200 processing cores to accurately study the electronic structure of water by means of a first-principles quantum chemistry technique known as coupled cluster. The team will make its results available to other researchers, who will be able to use this highly accurate data as inputs to their own simulations.
The unprecedented power of the Jaguar system is necessary for these calculations because the bond between water molecules is far more complex than that between other small molecules, and less demanding computational approaches fail to describe the system accurately. Aprà's simulation of a 24-molecule cluster is the first to explore these bonds from first principles using quantum mechanical forces as implemented in the coupled cluster method.
"With a single water molecule it's easy to see the structure," Aprà explained. "But the chemical bond formed by several water molecules clustered together is long range in nature. It's something that cheaper [less computationally demanding] and less accurate quantum mechanical methods don't describe accurately."
Source: Oak Ridge National Laboratory