November 20, 2008
'Jaguar' demonstrates its power and ease of use.
OAK RIDGE, Tenn., Nov. 20 -- A Cray XT5 supercomputer named Jaguar that runs scientific applications at the U.S. Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) placed in three out of four categories at the High-Performance Computing (HPC) Challenge awards, winning two "gold medals" and one "bronze" in this head-to-head competition. Results of the challenge, which measures excellence at handling computing workloads, were announced Nov. 18 in Austin at SC08, an international gathering of supercomputing professionals.
Jaguar won first place both for speed in solving a dense matrix of linear algebra equations (running a software code called High-Performance Linpack, or HPL) and for sustainable memory bandwidth, or how many gigabytes per second a node can fetch and store (running the STREAM code). It won third place for speed in executing the Global Fast Fourier Transform (Global FFT), a common algorithm used in many scientific applications.
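To give a sense of what the STREAM benchmark measures, here is a minimal sketch of its "triad" kernel, a loop whose speed is limited almost entirely by how fast memory can be read and written rather than by arithmetic. The array size, repetition count, and timing approach below are illustrative assumptions; the official STREAM benchmark adds validation, additional kernels, and stricter run rules.

```c
/* Minimal STREAM-style "triad" sketch: a[i] = b[i] + scalar * c[i].
 * Illustrative only -- not the official benchmark code. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000000   /* ~160 MB per array: assumed large enough to defeat caches */
#define NTIMES 10

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    double best = 1e30;
    for (int k = 0; k < NTIMES; k++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];      /* the triad kernel */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        if (sec < best) best = sec;        /* keep the fastest pass */
    }

    /* Read a result so the compiler cannot discard the stores. */
    printf("check: a[%d] = %.1f\n", N / 2, a[N / 2]);

    /* Each triad pass moves 3 arrays of N doubles: 2 reads + 1 write. */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("best triad bandwidth: %.2f GB/s\n", gbytes / best);

    free(a); free(b); free(c);
    return 0;
}
```

A single node's result from a kernel like this is on the order of gigabytes per second; the aggregate figures reported for Jaguar reflect this kind of bandwidth summed across tens of thousands of nodes.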
"The Cray Jaguar at ORNL winning two of the HPC Challenge benchmarks shows the power and potential of the computer system for handling some of the most challenging computational science problems," said Jack Dongarra of University of Tennessee-Knoxville and Oak Ridge National Laboratory. "It was able to produce an impressive 902 teraflops [trillion floating point operations per second] on HPL and 330 TB/s [terabytes per second] on STREAMS. Both results leave the second-place IBM Blue Gene/L at Lawrence Livermore National Laboratory far behind and demonstrates the balance between computing and communication bandwidth."
ORNL, along with Cray’s Chapel team led by Brad Chamberlain, shared another award for the most elegant implementation of the HPC Challenge benchmark applications in Cray's Chapel computer language.
John Levesque, director of the Cray Supercomputing Center of Excellence at ORNL, said the HPC Challenge, sponsored by the Defense Advanced Research Projects Agency High Productivity Computing Systems Program, supports the hardware and software development needed to effectively use petascale computers, which can execute quadrillions of calculations each second. Much as a decathlon measures performance across ten track and field events, the HPC Challenge measures a computer's ability to excel at a wide variety of tasks important to running scientific applications.
All of the benchmarks were run in two modes: baseline (no source-code modifications) and optimized (significant source-code modifications). Baselines demonstrate a machine's overall performance and ease of use, whereas optimizations boost performance on one specific aspect of computation. ORNL submitted baseline results for Jaguar, reflecting the system's ease of use. Among the posted baseline results, Jaguar ranked as the most powerful machine in three of the four categories and second in the fourth.
"The fact that Jaguar won these awards and placed so highly on the four major benchmarks with the baseline run attests to the superior performance and balance of the system," said Buddy Bland, project director for ORNL's Leadership Computing Facility, which hosts Jaguar. "This is truly a remarkable machine. It is exceptionally powerful in every measure that is important to the scientists who use this machine. Because it is a general-purpose computer that is easy to use, the scientists using this machine have been able to set new performance records on a wide range of science problems in just its first week of availability."
For more information about the HPC Challenge benchmarks, see http://www.hpcchallenge.org.
For more information on Jaguar, see http://www.nccs.gov/jaguar.
Source: Dawn Levy, Oak Ridge National Laboratory