November 19, 2009
Second ORNL-led team also finalist for Gordon Bell Prize
Nov. 19 -- A team led by Oak Ridge National Laboratory's (ORNL's) Markus Eisenbach was named winner Thursday of the 2009 ACM Gordon Bell Prize, which honors the world's highest-performing scientific computing applications. Another team led by ORNL's Edo Aprà was also among nine finalists for the prize.
Results of the contest were announced in Portland, Ore., during the SC09 international supercomputing conference. The prize is supported by high-performance computing pioneer Gordon Bell and is administered by the Association for Computing Machinery.
Eisenbach and colleagues from ORNL, Florida State University, and the Institute for Theoretical Physics and Swiss National Supercomputing Center achieved 1.84 thousand trillion calculations per second -- or 1.84 petaflops -- using an application that analyzes magnetic systems and, in particular, the effect of temperature on these systems. By accurately revealing the magnetic properties of specific materials -- even materials that have not yet been produced -- the project promises to boost the search for stronger, more stable magnets, thereby contributing to advances in such areas as magnetic storage and the development of lighter, stronger motors for electric vehicles.
The application -- known as WL-LSMS -- achieved this performance on ORNL's Cray XT5 Jaguar system, making use of more than 223,000 of Jaguar's 224,000-plus available processing cores and reaching nearly 80 percent of Jaguar's peak performance of 2.33 petaflops. Earlier in the week Jaguar was named number one on the TOP500 list of the world's fastest computers. The system was recently upgraded from four-core processors to six-core processors, boosting its peak performance to 2.33 petaflops.
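The figures quoted above are easy to sanity-check. A back-of-the-envelope division (all values taken from the article) confirms the "nearly 80 percent of peak" claim:

```python
# Values quoted in the article; the efficiency figure follows directly.
sustained_pflops = 1.84   # WL-LSMS sustained performance on Jaguar
peak_pflops = 2.33        # Jaguar peak after the six-core upgrade
cores_used = 223_000      # approximate cores used by WL-LSMS

efficiency = sustained_pflops / peak_pflops
print(f"Fraction of peak: {efficiency:.1%}")  # prints: Fraction of peak: 79.0%
```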
WL-LSMS allows researchers to directly and accurately calculate the temperature above which a material loses its magnetism -- known as the Curie temperature. The team's approach differs from earlier efforts because it sets aside empirical models and their attendant approximations to tackle the system through first-principles calculations.
"What we can do is calculate the Curie temperature for materials with high accuracy without external parameters," Eisenbach explained. "These first-principles calculations are orders of magnitude more computationally demanding than previous models; it's only with a petascale system such as Jaguar that calculations like this become feasible."
WL-LSMS combines two methods to achieve its goal. The first -- known as locally self-consistent multiple scattering, or LSMS -- applies density functional theory to solve the Dirac equation, a relativistic wave equation for electron behavior. The code has a robust history: it was the first code to sustain a trillion calculations per second, an achievement that earned its developers the prestigious 1998 Gordon Bell Prize. This approach, though, describes a system in its ground state at a temperature of absolute zero, or nearly -460°F. By incorporating a Monte Carlo method known as Wang-Landau, which guides the LSMS application, Eisenbach and his colleagues are able to explore technologically relevant temperature ranges.
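In WL-LSMS the Wang-Landau walker is fed first-principles energies from LSMS; that coupling is far beyond a short listing. The flat-histogram idea itself, however, can be sketched on a toy system. The following is only an illustration on a tiny 1D Ising chain with made-up parameters -- it is not the ORNL code, and the function name and settings are hypothetical:

```python
import math
import random

def wang_landau(n=8, f_final=1e-3, flat=0.8, seed=1):
    """Toy Wang-Landau estimate of the log density of states ln g(E)
    for a 1D Ising chain with periodic boundaries (J = 1).
    Illustrative only -- WL-LSMS replaces this toy energy function
    with first-principles LSMS energies."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]

    def energy():
        # Nearest-neighbour Ising energy with periodic boundary.
        return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

    log_g = {}     # running estimate of ln g(E), keyed by energy
    hist = {}      # visit histogram since the last refinement
    log_f = 1.0    # ln of the modification factor, halved when flat
    e = energy()
    while log_f > f_final:
        for _ in range(10_000):
            i = rng.randrange(n)
            # Energy change from flipping spin i (only two bonds change).
            de = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            e_new = e + de
            # Accept with probability min(1, g(E)/g(E_new)):
            # this drives the walk toward a flat histogram in energy.
            if log_g.get(e, 0.0) - log_g.get(e_new, 0.0) > math.log(rng.random()):
                spins[i] = -spins[i]
                e = e_new
            log_g[e] = log_g.get(e, 0.0) + log_f
            hist[e] = hist.get(e, 0) + 1
        # Once every energy level is visited roughly evenly, refine.
        if min(hist.values()) > flat * (sum(hist.values()) / len(hist)):
            log_f *= 0.5
            hist = {}
    return log_g

log_g = wang_landau()
# E = 0 has by far the most microstates; the fully aligned states
# (E = -8) are rarest, which is what the estimate should reflect.
```

The key point, mirrored in the real application, is that the sampler estimates the density of states over all energies rather than simulating one temperature at a time, so thermodynamic quantities at any temperature -- including the Curie point -- can be extracted from a single run.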
The work improves on previous advances in magnetic materials, Eisenbach said. He noted that materials research has led in the past century to more than a 50-fold increase in the magnetic strength of materials per volume and in the last decade to more than a 100-fold increase in the density of magnetic data storage. Other efforts that may benefit from the research include the design of lighter, more resilient steel and the development of future refrigerators that use magnetic cooling.
Aprà's team -- the other finalist led by an ORNL researcher -- achieved 1.39 petaflops on Jaguar in a first-principles, quantum mechanical exploration of the energy contained in clusters of water molecules. The team, comprising members from ORNL, Australian National University, Pacific Northwest National Laboratory (PNNL), and Cray Inc., used a computational chemistry application known as NWChem, which was developed at PNNL.
The application used 223,200 processing cores to accurately study the electronic structure of water by means of a first-principles quantum chemistry technique known as coupled cluster. The team will make its results available to other researchers, who will be able to use this highly accurate data as inputs to their own simulations.
The unprecedented power of the Jaguar system is necessary for these calculations because the bond between water molecules is far more complex than that between other small molecules, and less demanding computational approaches fail to describe the system accurately. Aprà's simulation of a 24-molecule cluster is the first to explore these bonds from first principles using quantum mechanical forces as implemented in the coupled cluster method.
"With a single water molecule it's easy to see the structure," Aprà explained. "But the chemical bond formed by several water molecules clustered together is long range in nature. It's something that cheaper [less computationally demanding] and less accurate quantum mechanical methods don't describe accurately."
Source: Oak Ridge National Laboratory