November 19, 2008
ARGONNE, Ill., Nov. 19 -- The U.S. Department of Energy's (DOE) Argonne National Laboratory has been named a winner of the annual High Performance Computing (HPC) Challenge Award at the Supercomputing 08 Conference in Austin, Texas.
"It is an honor to be recognized as a winner of the HPC Challenge," said Pete Beckman, director of Argonne's Leadership Computing Facility (ALCF). "This award proves that energy efficiency and computational power are not mutually exclusive. We can still push performance boundaries and deliver stellar results while using a fraction of the power typically needed for supercomputers."
Argonne was the clear winner in two of the four categories awarded in the HPC Challenge best-performance benchmark competition; its winning runs used 32 racks of the laboratory's Blue Gene/P.
Argonne's score of 103 GUPS (Giga Updates per Second) for Global RandomAccess was almost three times faster than last year's winner. Global RandomAccess measures memory performance and stresses traditional system bottlenecks that are directly correlated to application performance.
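For readers unfamiliar with the benchmark, the sketch below shows, in plain C, the kind of update loop that RandomAccess times: pseudo-random 64-bit values are XORed into random locations of a large table, and the reported figure is billions of such updates completed per second (GUPS). This single-node toy version is only illustrative; the table size, update count, seed, and timing method here are assumptions, and the real Global RandomAccess benchmark distributes the table across the full machine.

```c
/* Minimal single-node sketch of a RandomAccess-style update loop.
   Illustrative only: the HPCC Global RandomAccess benchmark spreads
   a much larger table across all nodes and reports GUPS. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TABLE_BITS 20                       /* 2^20 entries (~8 MB); real runs use far larger tables */
#define TABLE_SIZE (1ULL << TABLE_BITS)

int main(void)
{
    uint64_t *table = malloc(TABLE_SIZE * sizeof *table);
    if (!table) return 1;
    for (uint64_t i = 0; i < TABLE_SIZE; i++)
        table[i] = i;

    uint64_t ran = 1;                        /* assumed seed for the pseudo-random address stream */
    uint64_t updates = 4 * TABLE_SIZE;       /* assumed update count for this toy run */

    clock_t t0 = clock();
    for (uint64_t i = 0; i < updates; i++) {
        /* HPCC-style polynomial update of the random stream */
        ran = (ran << 1) ^ (((int64_t)ran < 0) ? 0x0000000000000007ULL : 0);
        /* XOR the random value into a random table location */
        table[ran & (TABLE_SIZE - 1)] ^= ran;
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("%.4f GUPS (single-node toy run)\n", updates / secs / 1e9);
    free(table);
    return 0;
}
```

Because every update lands at an effectively random address, the loop defeats caches and prefetchers, which is why the benchmark exposes memory and interconnect bottlenecks rather than raw arithmetic speed.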
Argonne also won the Global FFT category with a score of 5,080 Gflops. Global FFT measures the floating-point rate of execution of a double-precision complex one-dimensional Discrete Fourier Transform, which is used to efficiently transform one function into another.
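As a rough illustration of what Global FFT computes, the sketch below implements a naive double-precision complex one-dimensional DFT in C. The input signal, transform size, and O(N^2) formulation are illustrative assumptions; the benchmark itself uses a fast, distributed FFT across the whole machine and reports the sustained floating-point rate rather than the transform output.

```c
/* Naive double-precision complex 1-D DFT, for illustration only.
   Compile with: cc dft.c -lm
   The HPCC Global FFT benchmark uses a fast, distributed FFT and
   reports Gflop/s; this O(N^2) toy just shows the transform itself. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N 8   /* assumed transform size for this example */

int main(void)
{
    const double pi = acos(-1.0);
    double complex in[N], out[N];

    /* Sample input: a single-frequency complex signal at bin 2 */
    for (int n = 0; n < N; n++)
        in[n] = cexp(2.0 * pi * I * 2.0 * n / N);

    /* DFT: out[k] = sum_n in[n] * exp(-2*pi*i*k*n/N) */
    for (int k = 0; k < N; k++) {
        out[k] = 0.0;
        for (int n = 0; n < N; n++)
            out[k] += in[n] * cexp(-2.0 * pi * I * k * n / (double)N);
    }

    /* The energy should concentrate in bin 2, the signal's frequency */
    for (int k = 0; k < N; k++)
        printf("bin %d: magnitude %.3f\n", k, cabs(out[k]));

    return 0;
}
```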
The HPC Challenge is a suite of tests that examines the performance of high-end architectures using kernels with memory access patterns considered more challenging than those of the High Performance LINPACK benchmark used to determine the TOP500 list. The competition is sponsored by the DARPA High Productivity Computing Systems program and IDC. Its goal is to focus the HPC community's attention on developing the broad set of HPC hardware and software capabilities needed to use HPC systems productively.
"The HPC Challenge provides an important benchmark for accelerating petascale computation for breakthrough science and engineering and will be an important measure as we begin to work towards the exascale," Beckman added.
The ALCF is home to DOE's Intrepid, a 40-rack IBM Blue Gene/P with a peak performance of 557 teraflops (557 trillion calculations per second). The Blue Gene/P features a low-power system-on-a-chip architecture and a scalable communications fabric that enables science applications to spend more time computing and less time moving data between CPUs, both reducing power demands and lowering operating costs.
As part of DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, the ALCF provides in-depth expertise and assistance in using ALCF systems and optimizing applications, helping researchers from many scientific disciplines scale successfully to an unprecedented number of processors to solve some of the nation's most pressing technology challenges.
About the ALCF
The ALCF is a leadership-class computing facility that enables the research and development community to make innovative and high-impact science and engineering breakthroughs. Through the ALCF, researchers conduct computationally intensive projects on the largest possible scale. Argonne operates the ALCF for the DOE Office of Science as part of the larger DOE Leadership Computing Facility strategy. DOE leads the world in providing the most capable civilian supercomputers for science.
About Argonne National Laboratory
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
Source: Argonne National Laboratory