November 20, 2008
'Jaguar' demonstrates its power and ease of use.
OAK RIDGE, Tenn., Nov. 20 -- A Cray XT5 supercomputer named Jaguar that runs scientific applications at the U.S. Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) placed in three out of four categories at the High-Performance Computing (HPC) Challenge awards, winning two "gold medals" and one "bronze" in this head-to-head competition. Results of the challenge, which measures excellence at handling computing workloads, were announced Nov. 18 in Austin at SC08, an international gathering of supercomputing professionals.
Jaguar won first place both for speed in solving a dense matrix of linear algebra equations (running a software code called High-Performance Linpack, or HPL) and for sustainable memory bandwidth, or how many gigabytes per second a node can fetch and store (running the STREAM code). It won third place for speed in executing the Global Fast Fourier Transform (Global FFT), a common algorithm used in many scientific applications.
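To give a sense of what the STREAM result measures, the sketch below shows the "triad" kernel that STREAM reports (a[i] = b[i] + scalar * c[i]), with bandwidth computed as bytes moved divided by elapsed time. This is a simplified, single-threaded illustration for clarity, not the official STREAM benchmark code, and the array size chosen here is an arbitrary value large enough to spill out of cache.

```c
/* Minimal sketch of a STREAM-style "triad" bandwidth measurement.
 * Not the official STREAM benchmark; single-threaded and unoptimized. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (20 * 1000 * 1000)   /* array length; large enough to exceed cache */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    const double scalar = 3.0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];          /* triad: two loads, one store */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double bytes = 3.0 * N * sizeof(double);  /* read b, read c, write a */
    printf("triad bandwidth: %.2f GB/s\n", bytes / secs / 1e9);

    free(a); free(b); free(c);
    return 0;
}
```

Compiled with optimization (for example, gcc -O2), a loop like this is limited by memory traffic rather than arithmetic, which is why STREAM is used as a proxy for how quickly nodes can feed data to their processors.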
"The Cray Jaguar at ORNL winning two of the HPC Challenge benchmarks shows the power and potential of the computer system for handling some of the most challenging computational science problems," said Jack Dongarra of University of Tennessee-Knoxville and Oak Ridge National Laboratory. "It was able to produce an impressive 902 teraflops [trillion floating point operations per second] on HPL and 330 TB/s [terabytes per second] on STREAMS. Both results leave the second-place IBM Blue Gene/L at Lawrence Livermore National Laboratory far behind and demonstrates the balance between computing and communication bandwidth."
ORNL, along with Cray’s Chapel team led by Brad Chamberlain, shared another award for the most elegant implementation of the HPC Challenge benchmark applications in the Chapel programming language.
John Levesque, director of the Cray Supercomputing Center of Excellence at ORNL, said the HPC Challenge, sponsored by the Defense Advanced Research Projects Agency High Productivity Computing Systems Program, supports the hardware and software development needed to effectively use petascale computers, which can execute quadrillions of calculations each second. Much as a decathlon measures performance across ten track and field events, the HPC Challenge measures a computer's ability to excel at a wide variety of tasks important to running scientific applications.
All of the benchmarks were run in two modes: baseline (no source-code modifications) and optimized (significant source-code modifications). Baseline runs demonstrate a machine's overall performance and ease of use, whereas optimized runs boost performance on one specific aspect of computation. ORNL submitted only baseline results for Jaguar, reflecting the system's ease of use. Among the posted baseline results, Jaguar ranked as the most powerful machine in three of the four categories and second in the fourth.
"The fact that Jaguar won these awards and placed so highly on the four major benchmarks with the baseline run attests to the superior performance and balance of the system," said Buddy Bland, project director for ORNL's Leadership Computing Facility, which hosts Jaguar. "This is truly a remarkable machine. It is exceptionally powerful in every measure that is important to the scientists who use this machine. Because it is a general-purpose computer that is easy to use, the scientists using this machine have been able to set new performance records on a wide range of science problems in just its first week of availability."
For more information about the HPC Challenge benchmarks, see http://www.hpcchallenge.org.
For more information on Jaguar, see http://www.nccs.gov/jaguar.
Source: Dawn Levy, Oak Ridge National Laboratory