August 04, 2011
“If mathematics is the language of science, computation is the workhorse.”
This simple, profound statement was found in a recent presentation discussing the capabilities offered by petascale systems and the possibilities and answers that lie buried within advanced simulations.
A number of initiatives have emerged in recent years to bring modeling and simulation opportunities to diverse groups of researchers, one of which is the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.
While such initiatives tend to make waves when they are first announced, it is always something of a treat to catch up with their progress over the course of a few years to see what revelations have spawned from access to vast computational and support resources.
INCITE, which is managed by the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory and the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory, recently released a compilation report of its “greatest hits”—a string of research achievements powered by INCITE-granted access to petascale systems.
These “hits” include some of the noteworthy research endeavors that have been enabled by the INCITE program and serve as a testament to the power of advanced modeling and simulation capabilities. In this month's INCITE report, the program leaders highlight major projects in areas as diverse as materials science, physics, chemistry, seismology and beyond.
The program has been encouraging scientific and technological advances since 2003 by awarding slots of time on supercomputers as well as associated data storage and movement services. As Dawn Levy reported, “Since 2008 the program has focused on leadership computing facilities, from which researchers can obtain the largest single-award time allocations available on powerful computing systems, including the OLCF’s Cray XT5 (Jaguar) with 224,256 processing cores yielding a peak performance of 2.33 thousand trillion calculations each second and the ALCF’s IBM Blue Gene/P (Intrepid) with 163,840 processing cores yielding a peak performance of 557 trillion calculations per second.”
According to Levy, “For the 2011 calendar year, 57 INCITE awardees received a total of 1.7 billion processor hours. The allocations averaged 27 million hours, with one project receiving more than 110 million hours. From INCITE’s inception through the end of 2011, researchers from academia, government laboratories, and industry will have been allotted more than 4.5 billion processor hours to speed innovations and discoveries.”
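The quoted machine specifications imply a per-core peak rate for each system, which is easy to sanity-check. The short Python sketch below (illustrative only; the variable names are ours, and the figures come straight from the passage above) divides each system's quoted peak by its core count:

```python
# Back-of-the-envelope check of the peak-performance figures quoted above.
# System specs are taken from the article; per-core rates are derived.

jaguar_cores = 224_256       # OLCF Cray XT5 (Jaguar)
jaguar_peak = 2.33e15        # 2.33 thousand trillion calculations per second

intrepid_cores = 163_840     # ALCF IBM Blue Gene/P (Intrepid)
intrepid_peak = 557e12       # 557 trillion calculations per second

# Peak throughput per core, in floating-point operations per second
jaguar_per_core = jaguar_peak / jaguar_cores        # ~10.4 GF/s per core
intrepid_per_core = intrepid_peak / intrepid_cores  # ~3.4 GF/s per core

print(f"Jaguar:   {jaguar_per_core / 1e9:.1f} GF/s per core")
print(f"Intrepid: {intrepid_per_core / 1e9:.1f} GF/s per core")
```

The derived rates (roughly 10.4 and 3.4 gigaflops per core) are consistent with the article's quoted system-wide peaks.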
In the 68-page report, INCITE leaders spell out how advanced simulation is moving science along at an unprecedented rate. For example:
A team led by mechanical engineers from Sandia National Laboratories is using several million hours on the Jaguar supercomputer to simulate autoignition and injection processes with alternative fuels. Making combustion more efficient could have a dramatic impact on the environment and on industries reliant on natural gas and oil, but without access to high-end systems for modeling and simulation, these developments might never have been possible.
Clean energy research is also a priority at Oak Ridge National Laboratory, where simulations help scientists understand, control and design processes for clean energy, such as biomass conversion for energy production and supercapacitors for energy storage. Simulations are now solving the electronic structures of industrially important catalysts and device interfaces to accelerate breakthroughs in chemistry, nanotechnology and materials science.
As Robert Harrison, a computational chemist working on clean energy technologies at Oak Ridge National Lab stated, “Some of the largest calculations are only feasible on the leadership computers, not just because of speedy processors, but because of other architectural features—the amount of memory, the amount and speed of the disks, the speed and other characteristics of the interprocessor communication.”
Researchers from Argonne's Simulation-Based High-Efficiency Advanced Reactor Prototyping (SHARP) group are improving the safety and reliability of the next generation of nuclear reactors to provide a virtually carbon-free energy option. According to Argonne senior computational scientist Paul Fischer, “Advanced simulation is viewed as critical in bringing new reactor technology to fruition in an economical and timely manner.”
Igor Tsigelny from the University of California, San Diego was one of several researchers interviewed for the INCITE “hit list” of noteworthy projects. Like many of his colleagues across the disciplinary spectrum, he praised the role that advanced simulation is playing for research at the extreme scale. He noted in the report that “Thanks to the power of supercomputing, we are making major progress in understanding the origins of Parkinson's disease and developing ways to treat it.”
Physicist James Vary from Iowa State University, whose team is making use of computational time via the INCITE program, said, “Simulations have come to the stage of development where they are so precise that they can actually predict with some accuracy experimental results that have not yet been obtained.”