December 18, 2008
ARGONNE, Ill., Dec. 18 -- Based on their potential for breakthroughs in science and engineering research, 28 projects have been awarded 400 million hours of computing time at Argonne's Leadership Computing Facility (ALCF) through the Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.
The awards are part of a competitively selected group of 66 scientific projects announced by DOE's Office of Science (SC). INCITE is a DOE program supported by SC's Office of Advanced Scientific Computing Research that provides access to computing power and resources to support computationally intensive, large-scale research projects to researchers from industry, academia, and government research facilities.
"From understanding the makeup of our universe to protecting the quality of life here on earth, the computational science now possible using DOE's supercomputers touches all of our lives," said DOE Under Secretary for Science Dr. Raymond L. Orbach, who launched INCITE in 2003. "By dedicating time on these supercomputers to carefully selected projects, we are advancing scientific research in ways we could barely envision 10 years ago, improving our national competitiveness."
"INCITE is critical for advancing our nation's scientific leadership, but it also impacts our competitiveness and standard of living," said Argonne Director Robert Rosner. "The research addresses society's concerns about healthcare, the environment, climate change, creating clean and efficient energy, all while reducing time-to-market and prototyping costs through advanced simulation and modeling that would not be possible without facilities like ours."
Among the new INCITE awards at Argonne, one project will investigate the circulation of deep-ocean water for storing CO2, while another will use computer simulations -- in place of potentially dangerous experiments on actual patients -- to study cerebral blood flow and its role in understanding, diagnosing and treating cardiovascular disease. Other new and returning projects span a broad range of research areas.
"The INCITE program goes beyond providing access to supercomputers. A key aspect of the program is expanding the horizons of scientific thinking by connecting researchers with scientific and technical staff at DOE's computing facilities," said Pete Beckman, director of Argonne's Leadership Computing Facility. "Future breakthroughs will stem from the fusion and knowledge of different fields applying high performance computing and multi-disciplinary science."
Of the 28 INCITE projects that will use the energy-efficient Blue Gene/P at Argonne, 10 are new and 18 are renewals from 2008. The ALCF is home to DOE's Intrepid, a 40-rack IBM Blue Gene/P with a peak performance of 557 teraflops (557 trillion calculations per second). The Blue Gene/P features a low-power, system-on-a-chip architecture and a scalable communications fabric that lets science applications spend more time computing and less time moving data between CPUs, reducing power demands and lowering operating costs.
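For readers curious where the 557-teraflop figure comes from, here is a back-of-the-envelope check. It assumes IBM's published Blue Gene/P configuration -- 1,024 quad-core compute nodes per rack, an 850 MHz clock, and four floating-point operations per core per cycle from the dual floating-point units -- details not stated in this article:

    # Back-of-the-envelope peak-performance check for a 40-rack Blue Gene/P.
    # Configuration figures are IBM's published specs, not taken from the article above.
    racks = 40
    nodes_per_rack = 1024   # quad-core PowerPC 450 compute nodes per rack
    cores_per_node = 4
    clock_hz = 850e6        # 850 MHz core clock
    flops_per_cycle = 4     # dual FPU, one fused multiply-add per pipe per cycle

    peak = racks * nodes_per_rack * cores_per_node * clock_hz * flops_per_cycle
    print(f"Peak: {peak / 1e12:.0f} teraflops")  # prints "Peak: 557 teraflops"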
As part of the INCITE program, the ALCF provides in-depth expertise and assistance in using its systems and optimizing applications, helping researchers from a wide range of scientific disciplines scale successfully to unprecedented numbers of processors and tackle some of the nation's most pressing technology challenges.
Over the past 30 years, DOE's supercomputing program has played an increasingly important role in scientific research, allowing scientists to create more accurate models of complex processes, simulate problems once thought impossible, and analyze the growing volume of data generated by experiments.
To advance scientific discovery, DOE supports a portfolio of national high performance computing facilities and has allocated nearly 900 million processor-hours for supercomputing and data storage resources located at Argonne, Oak Ridge, Pacific Northwest and Lawrence Berkeley national laboratories.
To read more about all of the INCITE research taking place at Argonne's Leadership Computing Facility, visit http://www.alcf.anl.gov/collaborations/incite.php.
To read the Department of Energy's INCITE announcement, visit http://www.energy.gov/news/6804.htm.
For more information on the INCITE program, visit http://www.sc.doe.gov/ascr/incite/index.html.
About Argonne National Laboratory
The U.S. Department of Energy's Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
Source: Argonne National Laboratory