December 02, 2010
ARGONNE, Ill., Nov. 30 -- Four researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory lead projects that have been awarded a total of 65 million hours of computing time on Argonne's energy-efficient Blue Gene/P ("Intrepid") supercomputer. The researchers will conduct advanced simulation and analysis, performing virtual experiments that would be impossible or impractical in the natural world. They will also develop scalable system software needed to fully harness the power of supercomputers.
"The Department of Energy's supercomputers provide an enormous competitive advantage for the United States," said Energy Secretary Steven Chu. "This is a great example of how investments in innovation can help lead the way to new industries, new jobs and new opportunities for America to succeed in the global marketplace."
The Argonne-led projects are among 57 high-impact research projects aimed at breakthroughs in clean energy, climate science and fundamental research. DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program enables scientists and engineers to conduct cutting-edge research in just weeks or months, rather than years or decades, by providing access to, and support for, powerful supercomputing resources at DOE's Leadership Computing Facilities at Argonne National Laboratory in Illinois and Oak Ridge National Laboratory in Tennessee.
"By providing millions of hours of computing time on Argonne's Intrepid and the Cray XT5 ("Jaguar") at Oak Ridge, the DOE INCITE awards allow us to address some of the nation's most challenging scientific problems," said Rick Stevens, associate laboratory director for computing, environment and life sciences at Argonne.
The projects, selected competitively based on their potential to advance scientific discovery, range from improving battery technology to better understanding health and disease. They are profiled below in brief summaries. A full listing of awards, with detailed technical descriptions, is available online on the Advanced Scientific Computing Research website.
Paul Fischer, a senior computational scientist, was awarded 25 million hours on the Intrepid to conduct simulation and analysis of advanced nuclear reactor designs. "Advanced simulation is a critical component in bringing advanced reactor technology to fruition in an economic and timely manner," said Fischer.
As part of Argonne's Simulation-Based High-Efficiency Advanced Reactor Prototyping (SHARP) project, Fischer and his team are studying open questions concerning the thermal-hydraulic performance of several components in next-generation reactors. Thermal-hydraulic performance issues figure prominently in understanding how to design safe and efficient reactors; they include coolant mixing, pumping requirements and natural circulation, under a variety of operating conditions.
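Production thermal-hydraulics codes solve the full three-dimensional fluid and heat-transport equations at enormous resolution. As a deliberately simplified illustration of the underlying idea (not the SHARP project's actual solver, and with entirely arbitrary parameters), a one-dimensional heat-diffusion calculation in C might look like this:

```c
#include <stdio.h>

/* Toy 1-D heat-diffusion solver (explicit finite differences).
 * Purely illustrative of the kind of transport calculation that
 * production thermal-hydraulics codes perform in 3-D; all values
 * here are arbitrary. */
int main(void) {
    enum { N = 100 };            /* grid points */
    double T[N], Tnew[N];
    double alpha = 0.25;         /* diffusion number; stability needs <= 0.5 */

    for (int i = 0; i < N; i++) T[i] = 300.0;   /* coolant at 300 K */
    T[0] = 600.0;                               /* heated wall */

    for (int step = 0; step < 10000; step++) {
        for (int i = 1; i < N - 1; i++)
            Tnew[i] = T[i] + alpha * (T[i-1] - 2.0 * T[i] + T[i+1]);
        Tnew[0] = T[0];          /* fixed-temperature boundaries */
        Tnew[N-1] = T[N-1];
        for (int i = 0; i < N; i++) T[i] = Tnew[i];
    }
    printf("midpoint temperature: %.1f K\n", T[N/2]);
    return 0;
}
```

Real reactor simulations couple such heat transport to turbulent flow in complex geometries, which is what drives the need for millions of processor hours.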
Andrew Binkowski, a structural biologist, leads an Argonne team in applying the most advanced methods in biomolecular simulations and analysis to further our understanding of human health and disease. "A major obstacle to accurate biomolecular modeling is the number of approximations necessary to make the runtime feasible," said Binkowski. "The vast computing resources now remove some of these constraints, allowing us to study more advanced physics-based methods." Binkowski and his team will use the 20 million hours of computer time awarded on the Intrepid to study protein-ligand binding interactions. The team will also evaluate and validate the predictive power of biomolecular simulations through collaboration with the Center for Structural Genomics of Infectious Diseases.
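For orientation, the strength of a protein-ligand interaction is commonly summarized by a binding free energy, which physics-based simulations of this kind aim to predict. The following is a standard textbook relation, not a formula taken from the project:

$$\Delta G_{\text{bind}} = G_{\text{complex}} - G_{\text{protein}} - G_{\text{ligand}}, \qquad \Delta G^{\circ}_{\text{bind}} = RT \ln\!\left(\frac{K_d}{c^{\circ}}\right)$$

where $K_d$ is the experimentally measurable dissociation constant and $c^{\circ}$ the standard-state concentration; a more negative $\Delta G_{\text{bind}}$ means tighter binding, which is what makes computed binding energies directly comparable to laboratory data.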
Jeff Greeley, a materials scientist, was awarded 15 million hours of supercomputing time on Argonne's Intrepid to continue an investigation of materials at the nanoscale (a nanometer is one billionth of a meter). Greeley leads a collaboration seeking to understand the electronic and chemical properties of metal particles across the nanoscale regime.
"We expect to gain a comprehensive, first-principles-based picture of how the catalytic and electronic properties of a diverse array of metal nanoparticles evolve," he said. "Such information will ultimately assist in the design of enhanced nanocatalysts."
Ewing (Rusty) Lusk, director of Argonne's Mathematics and Computer Science Division, was awarded 5 million processor hours on the Intrepid to improve the performance and productivity of key system software components. Lusk heads a team investigating message-passing libraries, parallel input/output, data visualization and operating systems on high-performance computer systems. "Through rigorous experimentation, analysis and design cycles," he said, "we hope to dramatically enhance the capabilities not only of the current systems but of all systems pushing scalability limits in the near future."
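For readers unfamiliar with message-passing libraries, a minimal MPI program in C shows the basic send/receive pattern. This is illustrative only; the project studies the performance and scalability of such libraries at extreme scale, not this toy example:

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal MPI example: rank 0 sends an integer to rank 1.
 * Run with at least two processes, e.g.: mpiexec -n 2 ./a.out */
int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

At the scale of machines like Intrepid, the cost of each such exchange, and of parallel I/O and visualization built on top of it, determines how far applications can scale, which is the focus of the team's experimentation.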
Argonne researchers will also participate in six other INCITE projects. Four of the projects are new, and two are renewals.
The INCITE program was established by the U.S. Department of Energy Office of Science eight years ago to support computationally intensive, large-scale research projects.
About Argonne National Laboratory
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
Source: Argonne National Laboratory