March 07, 2011
Oak Ridge National Lab, which already hosts two petascale supercomputers, is planning to add another to its elite stable of HPC machines. According to a news report in the Knoxville News Sentinel, the DOE lab will begin installing a new 20 petaflop supercomputer, named "Titan," in late 2011, with the complete system ready to boot up sometime in 2012.
According to Jeff Nichols, the associate lab director for ORNL's Computing and Computational Sciences group, the initial 2011 installation will be used as a testbed before the full system is in place. Once complete, Titan will dwarf the lab's current top-tier number-crunchers, the 2.3 petaflop Jaguar and the 1.0 petaflop Kraken, both of which are Cray XT5 machines.
As you might expect from Oak Ridge, Titan will also be a Cray supercomputer, in this case, a yet-to-be-released GPU-accelerated machine that will use NVIDIA Tesla parts to deliver most of the FLOPS. Although the Knoxville News Sentinel report didn't specify the actual system, it will likely be an XE6 variant with Tesla GPU-equipped blades, which Cray has said it will launch later this year.
Nichols told the Knoxville News Sentinel that the entire system will cost about $100 million. That is probably quite a bit less expensive than the DOE's other 20 petaflop system, the IBM Blue Gene/Q "Sequoia" supercomputer. The price tag for that machine hasn't been disclosed, although the 10 petaflop IBM Blue Waters system at NCSA will run more than $200 million.
Like Titan, Sequoia is slated for initial delivery later this year, with the full set-up completed in 2012. That should make for an interesting match-up. For one thing, unless the Chinese fund another big system next year (which is certainly not out of the question), the two DOE machines will vie for supercomputing supremacy in 2012.
Assuming both machines deliver 20 peak petaflops, it's more likely that the Blue Gene/Q Sequoia will take the TOP500 (Linpack) title that year. Like most top-of-the-line CPU-based supers, Blue Genes yield something north of 80 percent of peak FLOPS on Linpack, while GPU-accelerated machines are currently delivering only around 50 percent of peak. GPU yields may improve by 2012, but they probably won't match those of the CPU-based supers.
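As a rough back-of-the-envelope illustration (a minimal sketch; the 80 and 50 percent figures are the approximations cited above, not measured results for either system), those yields translate into very different Linpack numbers for two nominally equal 20-petaflop machines:

```python
# Rough Linpack (Rmax) estimate from peak FLOPS and the approximate
# efficiencies cited above. These are assumptions for illustration,
# not measured results for Sequoia or Titan.

PEAK_PFLOPS = 20.0  # both systems target roughly 20 peak petaflops

efficiencies = {
    "Sequoia (Blue Gene/Q, CPU-based)": 0.80,  # "north of 80 percent"
    "Titan (GPU-accelerated)": 0.50,           # "~50 percent of peak"
}

for system, eff in efficiencies.items():
    rmax = PEAK_PFLOPS * eff
    print(f"{system}: ~{rmax:.0f} petaflops on Linpack")
# Prints ~16 petaflops for Sequoia versus ~10 petaflops for Titan.
```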
For a number of reasons, the Sequoia machine is also more likely to deliver a better performance-per-watt metric than Titan. On the latest Green500 list, a Blue Gene/Q prototype was about 75 percent more efficient than the third-ranked TSUBAME 2.0 GPU-equipped super. (The number two Green500 system was a special-purpose GRAPE-DR supercomputer.) Some of that has to do with better Linpack performance, as mentioned above, but Blue Gene technology in general tends to be quite energy efficient, thanks to its custom integration and SoC PowerPC architecture.
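For reference, the November 2010 Green500 listed the Blue Gene/Q prototype at roughly 1,684 megaflops per watt and TSUBAME 2.0 at roughly 958 megaflops per watt (both values are approximate); a quick calculation recovers the "about 75 percent" advantage:

```python
# Relative energy efficiency from approximate November 2010 Green500
# figures, in megaflops per watt (treat both values as approximate).
bluegene_q_prototype = 1684.0  # Blue Gene/Q prototype, ranked #1
tsubame_2_0 = 958.0            # TSUBAME 2.0, ranked #3

advantage = (bluegene_q_prototype / tsubame_2_0 - 1) * 100
print(f"Blue Gene/Q prototype advantage: ~{advantage:.0f} percent")  # ~76
```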
The performance match-up will be for bragging rights only, since the two DOE machines won't really be competing for applications. Sequoia's primary duty will be to run classified nuclear weapons simulations for the NNSA's Stockpile Stewardship program, although it will be available part-time for science applications in fields like astronomy, energy, genomics, and climatology.
Titan, on the other hand, will be dedicated to running a wide variety of open science applications. Presumably Titan will also end up in the INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program, which means a number of academic and commercial users will get a chance to play on the first multi-petaflop GPU supercomputer in the US.
Full story at Knoxville News Sentinel