December 03, 2010
BERKELEY, Calif., Nov. 30 -- Scientists at the Department of Energy's (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab) have been awarded massive allocations on the nation's most powerful supercomputer to advance innovative research in improving the combustion of hydrogen fuels and increasing the efficiency of nanoscale solar cells. The awards were announced today (Tuesday, Nov. 30) by Energy Secretary Steven Chu as part of DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.
The INCITE program selected 57 research projects that will use supercomputers at Argonne and Oak Ridge national laboratories to create detailed scientific simulations -- virtual experiments that in most cases would be impossible or impractical in the natural world. The program allocated 1.7 billion processor-hours to the selected projects. A processor-hour is one hour of computing time on a single processor core, and is how time is allocated on a supercomputer. Running a 10-million-hour project on a laptop computer with a quad-core processor would take more than 285 years.
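The laptop comparison can be checked directly with a short calculation (assuming the 10-million-hour allocation is spread evenly across four cores running nonstop):

```python
# Verify the laptop comparison: a 10-million processor-hour project
# run on a quad-core laptop with all four cores busy continuously.
allocation_hours = 10_000_000   # processor-hours awarded
cores = 4                       # quad-core laptop

wall_clock_hours = allocation_hours / cores   # 2,500,000 hours of real time
years = wall_clock_hours / (24 * 365)         # convert hours to years
print(f"{years:.1f} years")                   # -> 285.4 years
```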
"The Department of Energy's supercomputers provide an enormous competitive advantage for the United States," said Secretary Chu. "This is a great example of how investments in innovation can help lead the way to new industries, new jobs, and new opportunities for America to succeed in the global marketplace."
Reducing Dependence on Fossil Fuels
One strategy for reducing U.S. dependence on petroleum is to develop new fuel-flexible combustion technologies for burning hydrogen or hydrogen-rich fuels obtained from a gasification process. John Bell and Marcus Day of Berkeley Lab's Center for Computational Sciences and Engineering were awarded 40 million hours on the Cray supercomputer "Jaguar" at the Oak Ridge Leadership Computing Facility (OLCF) for "Simulation of Turbulent Lean Hydrogen Flames in High Pressure," a project investigating the combustion chemistry of such fuels.
Hydrogen is a clean fuel that, when consumed, emits only water, making it a potentially promising part of our clean energy future. Researchers will use the Jaguar supercomputer to better understand how hydrogen and hydrogen compounds could be used as a practical fuel for transportation and power generation.
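Hydrogen's clean byproduct follows from the combustion stoichiometry, 2H2 + O2 -> 2H2O: every mole of hydrogen burned yields one mole of water and nothing else. A quick calculation (using standard molar masses, not figures from the article) shows how much water burning a kilogram of hydrogen produces:

```python
# Stoichiometry of hydrogen combustion: 2 H2 + O2 -> 2 H2O.
# Standard molar masses in g/mol (textbook values, assumed here).
M_H2 = 2.016
M_H2O = 18.015

kg_h2 = 1.0
mol_h2 = kg_h2 * 1000 / M_H2     # moles of H2 burned
kg_h2o = mol_h2 * M_H2O / 1000   # one mole of H2O per mole of H2
print(f"{kg_h2o:.2f} kg of water per kg of H2")  # -> 8.94 kg
```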
Nanomaterials Have Big Solar Energy Potential
Nanostructures, tiny materials 100,000 times finer than a human hair, may hold the key to improving the efficiency of solar cells -- if scientists can gain a fundamental understanding of nanostructure behaviors and properties. To better understand and demonstrate the potential of nanostructures, Lin-Wang Wang of Berkeley Lab's Materials Sciences Division was awarded 10 million hours on the Cray supercomputer at OLCF. Wang's project is "Electronic Structure Calculations for Nanostructures."
Currently, nanoscale solar cells made of inorganic systems suffer from low efficiency, in the range of 1-3 percent. For nanoscale solar cells to have an impact in the energy market, their efficiencies must be improved to more than 10 percent. The goal of Wang's project is to understand the mechanisms of the critical steps inside a nanoscale solar cell, from how solar energy is absorbed to how it is converted into usable electricity. Although many of the processes are known, some of the corresponding critical aspects of these nanosystems are still not well understood.
Because Wang studies systems with 10,000 atoms or more, he relies on large-scale allocations such as his INCITE award to advance his research. To make the most effective use of his allocations, Wang and collaborators developed the Linearly Scaling Three-Dimensional Fragment (LS3DF) method, which lets him study systems that would take more than 1,000 times longer with conventional simulation techniques, even on the biggest supercomputers. LS3DF won an ACM Gordon Bell Prize in 2008 for algorithm innovation.
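The advantage of a linearly scaling method over conventional cubic-scaling electronic structure calculations can be seen with a back-of-the-envelope cost model. The constants below are purely illustrative (not LS3DF's actual performance figures); only the ratio between the two scalings at large system sizes is meaningful:

```python
# Illustrative O(N) vs O(N^3) cost comparison for electronic
# structure calculations. All constants are hypothetical; the
# point is how the gap grows with system size N.
def conventional_cost(n_atoms):
    """Conventional methods scale roughly as N^3 in atom count."""
    return n_atoms ** 3

def fragment_cost(n_atoms, fragment_size=100):
    """A divide-and-conquer method pays cubic cost per small
    fragment, but the fragment count grows only linearly with N."""
    n_fragments = n_atoms / fragment_size
    return n_fragments * fragment_size ** 3

n = 10_000  # system size typical of Wang's studies
speedup = conventional_cost(n) / fragment_cost(n)
print(f"~{speedup:,.0f}x fewer operations at {n} atoms")
```

With these toy numbers the fragment approach needs roughly 10,000 times fewer operations at 10,000 atoms, consistent in spirit with the 1,000-fold figure cited above.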
Advancing Supernova Simulations
Berkeley Lab's John Bell is also a co-investigator on another INCITE project, "Petascale Simulations of Type Ia Supernovae from Ignition to Observables." The project, led by Stan Woosley of the University of California-Santa Cruz, uses two supercomputing applications developed by Bell's team: MAESTRO, to model the convective processes inside certain stars in the hours leading up to ignition, and CASTRO, to model the massive explosions known as Type Ia supernovae. The project received 50 million hours on the Cray supercomputer at OLCF.
Type Ia supernovae (SN Ia) are the largest thermonuclear explosions in the modern universe. Because of their brilliance and nearly constant luminosity at peak, they are also a "standard candle" favored by cosmologists to measure the rate of cosmic expansion. Yet, after 50 years of study, no one really understands how SN Ia work. This project aims to use these applications to model the beginning-to-end processes of these exploding stars.
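The standard-candle technique rests on the distance modulus relation: if the absolute magnitude M of an SN Ia at peak brightness is known, its observed apparent magnitude m gives the luminosity distance via m - M = 5 log10(d / 10 pc). A minimal sketch (the magnitudes used are typical textbook values, not results from this project):

```python
def luminosity_distance_pc(m_apparent, m_absolute):
    """Distance in parsecs from the distance modulus relation:
    m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((m_apparent - m_absolute) / 5 + 1)

# Typical assumed values: SN Ia peak absolute magnitude ~ -19.3,
# observed at apparent magnitude 16.7.
d_pc = luminosity_distance_pc(16.7, -19.3)
print(f"{d_pc / 1e6:.0f} Mpc")  # -> 158 Mpc
```

Comparing distances inferred this way with the supernovae's redshifts is what lets cosmologists chart the rate of cosmic expansion.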
Read more about the INCITE program at http://www.energy.gov/news/9834.htm.
About Berkeley Lab
Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California. Visit its website at www.lbl.gov.
Source: Lawrence Berkeley National Laboratory