October 29, 2012
OAK RIDGE, Tenn., Oct. 29 – The U.S. Department of Energy’s (DOE’s) Leadership Computing Facilities (LCFs) have awarded a combined 4.7 billion supercomputing core hours to 61 science and engineering projects with high potential for accelerating discovery and innovation through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The allocation of resources at Argonne and Oak Ridge national laboratories, enabled by the deployment of next-generation high-performance computers, will provide researchers in industry, academia, and government access to some of the world’s most powerful supercomputers to address grand challenges, from advancing sustainable energy to understanding the environmental consequences of energy use.
“Supercomputing is essential to solving our greatest scientific and technological challenges and improving our economic prosperity, energy security, and global competitiveness,” said James Hack, director of the National Center for Computational Sciences (NCCS), which houses the Oak Ridge Leadership Computing Facility (OLCF) in Oak Ridge, Tenn. “Simulations that exploit massive parallelism play a critical role in building our future. Today’s awards will speed insights into the natural world from subatomic particles to earthquakes to supernovae, and the engineered world from cars and concrete to catalysts and computer chips.”
Added Michael Papka, director of the Argonne Leadership Computing Facility (ALCF) just outside Chicago, “The 2013 INCITE awards will accelerate breakthroughs by allowing greater complexity and realism in simulations, from carbon sequestration underground to turbulent combustion in power and propulsion devices.”
The ALCF’s newest leadership computing resource is Mira, a 10-petaflop IBM Blue Gene/Q system with 49,152 compute nodes and a power-efficient architecture. The ALCF also houses Intrepid, a 557-teraflop IBM Blue Gene/P. The OLCF is home to Titan, a new 20-petaflop Cray XK7 hybrid system employing both central processing units and energy-efficient, high-performance graphics processing units in its 18,688 compute nodes. For 2013, the INCITE program, which is jointly managed by DOE’s LCFs, awarded 2.83 billion core hours at the ALCF and 1.84 billion core hours at the OLCF on systems capable of carrying out quadrillions of calculations each second (petaflops).
When INCITE made its first awards in 2004, three projects received an aggregate 5 million hours on DOE supercomputers. Today’s collective allocation of 4.7 billion hours represents an almost 1,000-fold growth in resources provided to researchers. To date, INCITE has delivered more than 10 billion computing hours to the scientific community.
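The arithmetic behind these figures can be checked directly. A minimal sketch in Python, using only the numbers quoted in this article:

```python
# Consistency check on the INCITE allocation figures quoted above.
alcf_hours = 2.83e9   # 2013 core hours awarded at the ALCF
olcf_hours = 1.84e9   # 2013 core hours awarded at the OLCF
total_2013 = alcf_hours + olcf_hours
print(f"2013 total: {total_2013 / 1e9:.2f} billion hours")  # ~4.67, reported as 4.7

first_awards_2004 = 5e6  # aggregate hours in INCITE's first year
growth = total_2013 / first_awards_2004
print(f"growth since 2004: ~{growth:.0f}-fold")  # ~934, i.e. "almost 1,000-fold"
```

The two facility awards sum to about 4.67 billion hours, consistent with the 4.7 billion figure reported, and the growth since 2004 works out to roughly 930-fold.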
This announcement coincides with Oak Ridge National Laboratory's unveiling of Titan, a system capable of churning through more than 20,000 trillion calculations each second—or 20 petaflops—by employing a family of processors, called graphics processing units (GPUs), first created for computer gaming. Titan will be 10 times more powerful than ORNL’s last world-leading system, Jaguar, while overcoming power and space limitations inherent in the previous generation of high-performance computers.
Titan, which is supported by the Department of Energy, will provide unprecedented computing power for research in energy, climate change, efficient engines, materials, and other disciplines and will pave the way for a wide range of achievements in science and technology.
The Cray XK7 system contains 18,688 nodes, with each holding a 16-core AMD Opteron 6274 processor and an NVIDIA Tesla K20 GPU accelerator. Titan also has more than 700 terabytes of memory. The combination of central processing units (CPUs), the traditional foundation of high-performance computers, and more recent GPUs will allow Titan to occupy the same space as its Jaguar predecessor while using only marginally more electricity.
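The aggregate CPU core count cited later in this article follows directly from that node configuration. A quick check, again using only figures from the article:

```python
nodes = 18_688        # Cray XK7 compute nodes in Titan
cores_per_node = 16   # one 16-core AMD Opteron 6274 per node
cpu_cores = nodes * cores_per_node
print(cpu_cores)      # 299008 CPU cores, matching the total cited below
gpu_accelerators = nodes  # one NVIDIA Tesla K20 per node
print(gpu_accelerators)
```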
“One challenge in supercomputers today is power consumption,” said Jeff Nichols, associate laboratory director for computing and computational sciences. “Combining GPUs and CPUs in a single system requires less power than CPUs alone and is a responsible move toward lowering our carbon footprint. Titan will provide unprecedented computing power for research in energy, climate change, materials, and other disciplines to enable scientific leadership.”
Because they handle hundreds of calculations simultaneously, GPUs can work through many more computations than CPUs in a given time. By relying on its 299,008 CPU cores to guide simulations and allowing its new NVIDIA GPUs to do the heavy lifting, Titan will enable researchers to run scientific calculations with greater speed and accuracy.
“Titan will allow scientists to simulate physical systems more realistically and in far greater detail,” said James Hack, director of ORNL’s National Center for Computational Sciences. “The improvements in simulation fidelity will accelerate progress in a wide range of research areas such as alternative energy and energy efficiency, the identification and development of novel and useful materials, and the opportunity for more advanced climate projections.”
The S3D application models the turbulent combustion of fuels in an internal combustion engine. This line of research is critical to the American energy economy, given that three-quarters of the fossil fuel used in the United States goes to powering cars and trucks, which produce one-quarter of the country’s greenhouse gases.
Titan will allow researchers to model large-molecule hydrocarbon fuels such as the gasoline surrogate isooctane; commercially important oxygenated alcohols such as ethanol and butanol; and biofuel surrogates that blend methyl butanoate, methyl decanoate, and n-heptane.
Nuclear researchers use the Denovo application to, among other things, model the behavior of neutrons in a nuclear power reactor. America’s aging nuclear power plants provide about a fifth of the country’s electricity, and Denovo will help them extend their operating lives while ensuring safety. Titan will allow Denovo to simulate a fuel rod through one round of use in a reactor core in 13 hours; this job took 60 hours on the Jaguar system.
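That runtime improvement corresponds to roughly a 4.6x speedup, as a quick calculation from the figures above shows:

```python
jaguar_hours = 60   # one in-core fuel-rod cycle simulated with Denovo on Jaguar
titan_hours = 13    # the same job on Titan
speedup = jaguar_hours / titan_hours
print(f"Denovo speedup on Titan: {speedup:.1f}x")  # ~4.6x
```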
The Community Atmosphere Model–Spectral Element simulates long-term global climate. Improved atmospheric modeling under Titan will help researchers better understand future air quality as well as the effect of particles suspended in the air.
About DOE’s Office of Science: The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.
About INCITE: The Innovative and Novel Computational Impact on Theory and Experiment program promotes transformational advances in science and technology through large allocations of time on state-of-the-art supercomputers.
About America’s Leadership Computing Facilities: The U.S. Department of Energy’s Leadership Computing Facilities, located at Oak Ridge and Argonne National Laboratories, house some of the world’s most advanced supercomputers to accelerate scientific discovery and engineering innovation.
Source: Oak Ridge National Laboratory