December 18, 2008
WASHINGTON, Dec. 18 -- The U.S. Department of Energy's (DOE) Office of Science announced today that 66 projects addressing some of the greatest scientific challenges have been awarded access to some of the world's most powerful supercomputers at DOE national laboratories. The projects -- competitively selected for their technical readiness and scientific merit -- will advance research in key areas such as astrophysics, climate change, new materials, energy production and biology, and thereby advance U.S. competitiveness.
"From understanding the makeup of our universe to protecting the quality of life here on earth, the computational science now possible using DOE's supercomputers touches all of our lives," said DOE Under Secretary for Science Raymond Orbach, who launched INCITE in 2003. "By dedicating time on these supercomputers to carefully selected projects, we are advancing scientific research in ways we could barely envision 10 years ago, improving our national competitiveness."
The allocations of supercomputing and data storage resources will be made under DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, which supports computationally intensive, large-scale research projects. After a selection process that included peer review of each proposal for scientific merit and computational readiness, nearly 900 million processor-hours are being awarded to 25 new projects and 41 renewal projects. Access to DOE's supercomputers will allow cutting-edge research to be carried out in weeks or months, rather than years or decades.
Over the past year, DOE supercomputing centers have dramatically increased the size of their systems, providing scientists with more computing time and allowing them to conduct more detailed and accurate simulations of scientific problems. The 2009 INCITE allocations awarded a total of 889 million processor-hours -- more than three times the time allocated in 2008. Processor-hours refer to allocations of time on a supercomputer. A project receiving one million hours could run on 10,000 processors for 100 hours, or just over four days. Running a one-million-hour project on a dual-processor desktop computer would take more than 57 years.
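The processor-hour arithmetic above can be sketched in a few lines; the function name is ours for illustration, and the figures are simply those quoted in the release:

```python
# Processor-hours divided by processor count gives wall-clock runtime.

def runtime_hours(allocation_hours: float, processors: int) -> float:
    """Wall-clock hours needed to consume an allocation on a given machine."""
    return allocation_hours / processors

ALLOCATION = 1_000_000  # a one-million-processor-hour award

# On a 10,000-processor system: 100 hours, just over four days.
hours_on_cluster = runtime_hours(ALLOCATION, 10_000)
print(hours_on_cluster, hours_on_cluster / 24)       # 100.0 hours, ~4.2 days

# On a dual-processor desktop: more than 57 years.
hours_on_desktop = runtime_hours(ALLOCATION, 2)
print(hours_on_desktop / (24 * 365))                 # ~57.1 years
```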
For example, predicting global climate change typically involves running an ensemble of scientific models, as the earth's climate unfolds over virtual decades and centuries. The models also draw on massive sets of observed data, making such detailed research only possible on large-scale supercomputers. A decade ago, even the most powerful computers could only produce rudimentary climate models. Today, climate can be modeled at the 10-kilometer scale at the regional level, and can lead to better understanding of regional phenomena such as droughts and hurricanes.
Now in its sixth year, INCITE gives scientists at national laboratories and universities the tools they need to study complex physical and engineered systems. For example, life sciences researchers are using INCITE time to study protein folding to improve disease treatment and prevention and, in another project, to develop future biofuel sources. Chemistry researchers are using INCITE time to simulate combustion reactions to design cleaner, more efficient energy systems.
The INCITE awards will help advance research in accelerator physics, astrophysics, chemical sciences, climate research, computer science, engineering physics, environmental science, fusion energy, life sciences, materials science, nuclear physics, and nuclear engineering. Applications range from designing quieter cars to improving commercial aircraft design, from developing nanomaterials to simulating earthquakes. Fact sheets describing the projects can be found at http://www.sc.doe.gov/ascr/incite.
The projects will be awarded time at DOE's Leadership Computing Facilities at Oak Ridge National Laboratory in Tennessee and Argonne National Laboratory in Illinois, as well as the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory in California, and the Molecular Science Computing Facility at Pacific Northwest National Laboratory in Washington.
University researchers receiving INCITE awards are from Caltech; Colorado State University; École Normale Supérieure de Lyon; Georgetown University; New York University; Northwestern University; Purdue University; Rensselaer Polytechnic Institute; Stanford University; the University of Arizona; the University of California campuses at Davis, Los Angeles, Santa Barbara, San Diego and Santa Cruz; the University of Chicago; University College London; the University of Colorado; the University of Illinois, Urbana-Champaign; the University of Pennsylvania; the University of Rochester; the University of Washington; and the University of Wisconsin, Madison.
DOE scientists receiving awards conduct research at Argonne, Lawrence Berkeley, Oak Ridge and Pacific Northwest National Laboratories as well as National Energy Technology Laboratory, Princeton Plasma Physics Laboratory, Sandia National Laboratories, SLAC National Accelerator Laboratory and the Thomas Jefferson National Accelerator Facility.
Awards were also made to researchers at the National Center for Atmospheric Research; NASA's Goddard Space Flight Center; the National Oceanographic and Atmospheric Administration; the National Institute of Standards and Technology; the Southern California Earthquake Center; CERFACS, the European Center for Research and Advanced Training in Scientific Computation in France; and the Weizmann Institute of Science in Israel.
Industries receiving INCITE awards are: Corning Inc., Gene Network Sciences, General Atomics, General Motors, Pratt and Whitney, Procter and Gamble, and The Boeing Co.
DOE's Office of Science is the single largest supporter of basic research in the physical sciences for the nation and ensures U.S. leadership across a broad range of scientific disciplines. For more information about the Office of Science, visit www.science.doe.gov.
Source: U.S. Department of Energy