December 18, 2008
ARGONNE, Ill., Dec. 18 -- Based on their potential for breakthroughs in science and engineering research, 28 projects have been awarded 400 million hours of computing time at Argonne's Leadership Computing Facility (ALCF) through the Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.
The awards are part of a competitively selected group of 66 scientific projects announced by DOE's Office of Science (SC). INCITE is a DOE program supported by SC's Office of Advanced Scientific Computing Research that provides researchers from industry, academia, and government research facilities with access to the computing power and resources needed for computationally intensive, large-scale research projects.
"From understanding the makeup of our universe to protecting the quality of life here on earth, the computational science now possible using DOE's supercomputers touches all of our lives," said DOE Under Secretary for Science Dr. Raymond L. Orbach, who launched INCITE in 2003. "By dedicating time on these supercomputers to carefully selected projects, we are advancing scientific research in ways we could barely envision 10 years ago, improving our national competitiveness."
"INCITE is critical for advancing our nation's scientific leadership, but it also impacts our competitiveness and standard of living," said Argonne Director Robert Rosner. "The research addresses society's concerns about healthcare, the environment, climate change, creating clean and efficient energy, all while reducing time-to-market and prototyping costs through advanced simulation and modeling that would not be possible without facilities like ours."
Among the new INCITE awards at Argonne is a project investigating deep-ocean water circulation as a means of storing CO2, and another that will use computer simulations of cerebral blood flow -- instead of potentially dangerous experiments on actual patients -- to study its role in understanding, diagnosing and treating cardiovascular disease. Other new and returning projects span a range of additional research areas.
"The INCITE program goes beyond providing access to supercomputers. A key aspect of the program is expanding the horizons of scientific thinking by connecting researchers with scientific and technical staff at DOE's computing facilities," said Pete Beckman, director of Argonne's Leadership Computing Facility. "Future breakthroughs will stem from the fusion and knowledge of different fields applying high performance computing and multi-disciplinary science."
Of the 28 INCITE projects that will use the energy-efficient Blue Gene/P at Argonne, 10 are new and 18 are renewals from 2008. The ALCF is home to DOE's Intrepid, a 40-rack IBM Blue Gene/P with a peak performance of 557 teraflops (557 trillion calculations per second). The Blue Gene/P features a low-power, system-on-a-chip architecture and a scalable communications fabric that lets science applications spend more time computing and less time moving data between CPUs, reducing both power demands and operating costs.
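To make the compute-versus-communication tradeoff concrete, the minimal C/MPI sketch below (illustrative only, not ALCF or IBM code) shows the common pattern of starting a nonblocking halo exchange and computing on local data while the network moves the boundary data; the buffer names and sizes are hypothetical.

```c
/* Illustrative sketch: overlapping computation with communication via
 * nonblocking MPI -- the kind of pattern a scalable communications
 * fabric is designed to serve. Ranks in a ring exchange a halo buffer
 * while each rank works on its interior data. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    int rank, size;
    double halo_out[N], halo_in[N], interior[N];
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;          /* ring neighbors */
    int left  = (rank - 1 + size) % size;

    for (int i = 0; i < N; i++) {
        halo_out[i] = rank;                  /* boundary data to send */
        interior[i] = i;                     /* local work array */
    }

    /* Start the halo exchange, then compute on interior data while
     * the network moves the boundary data. */
    MPI_Irecv(halo_in,  N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(halo_out, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    double sum = 0.0;
    for (int i = 0; i < N; i++)
        sum += interior[i] * interior[i];    /* interior computation */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    sum += halo_in[0];                       /* boundary work uses the halo */

    if (rank == 0)
        printf("rank 0 partial result: %f\n", sum);

    MPI_Finalize();
    return 0;
}
```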
As part of the INCITE program, the ALCF provides in-depth expertise and assistance in using its systems and optimizing applications, helping researchers from all scientific disciplines scale successfully to unprecedented numbers of processors to solve some of the nation's most pressing technology challenges.
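As a rough illustration of why scaling to so many processors demands this kind of optimization help, the short C sketch below works through Amdahl's law, the textbook model of parallel speedup; the 1 percent serial fraction and the core counts are hypothetical figures, not measurements from any INCITE project.

```c
/* Illustrative sketch: Amdahl's law. With serial fraction s, the best
 * possible speedup on p processors is 1 / (s + (1 - s) / p), so even a
 * small serial fraction caps efficiency at large processor counts. */
#include <stdio.h>

int main(void)
{
    double s = 0.01;                       /* assume 1% of the work is serial */
    int procs[] = { 1024, 8192, 40960 };   /* hypothetical core counts */
    int n = sizeof procs / sizeof procs[0];

    for (int i = 0; i < n; i++) {
        int p = procs[i];
        double speedup = 1.0 / (s + (1.0 - s) / p);
        printf("p = %6d  speedup = %8.1f  efficiency = %5.1f%%\n",
               p, speedup, 100.0 * speedup / p);
    }
    return 0;
}
```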
Over the past 30 years, the Department of Energy's supercomputing program has played an increasingly important role in scientific research, allowing scientists to create more accurate models of complex processes, simulate problems once thought impossible, and analyze the growing volumes of data generated by experiments.
To advance scientific discovery, DOE supports a portfolio of national high performance computing facilities and has allocated nearly 900 million processor-hours for supercomputing and data storage resources located at Argonne, Oak Ridge, Pacific Northwest and Lawrence Berkeley national laboratories.
To read more about all of the INCITE research taking place at Argonne's Leadership Computing Facility, visit http://www.alcf.anl.gov/collaborations/incite.php.
To read the Department of Energy's INCITE announcement, visit http://www.energy.gov/news/6804.htm.
For more information on the INCITE program, visit http://www.sc.doe.gov/ascr/incite/index.html.
About Argonne National Laboratory
The U.S. Department of Energy's Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
Source: Argonne National Laboratory