September 9, 2005

DARPA, HPCwire Announce HPC Challenge

Nicole Hemsoth

The DARPA High Productivity Computing Systems Program and HPCwire are pleased to announce the first annual HPC Challenge Award Competition (www.hpcchallenge.org). The goal of the competition is to focus the HPC community's attention on developing a broad set of HPC hardware and software capabilities that are necessary to productively use HPC systems.

The awards session will be held during the SC|05 conference, on Tuesday, Nov. 15, during a noon Birds of a Feather session. The core of the HPC Challenge Award Competition is the HPC Challenge benchmark suite developed at the University of Tennessee under the DARPA HPCS program with input from a range of organizations around the world (see http://icl.cs.utk.edu/hpcc/).

“Today, things are much more complicated than they were when the Linpack Benchmark was first developed some 30 years ago,” said Jack Dongarra, from the Innovative Computing Laboratory at the University of Tennessee and a co-chair of the HPCC awards committee. “With the HPCS Benchmark and HPC Challenge Awards, we are setting the standards for benchmarking methodology and result-reporting together with a control database/repository for both the benchmarks and the results.”

The competition will focus on four of the most challenging benchmarks in the suite, each described briefly below:

  • Global HPL;
  • Global RandomAccess;
  • EP STREAM (Triad) per system;
  • Global FFT.
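
Of these, Global HPL solves a large dense linear system Ax = b, Global FFT computes a large one-dimensional discrete Fourier transform, EP STREAM (Triad) measures sustainable memory bandwidth, and Global RandomAccess measures the rate of random updates to memory (GUPS). As a rough illustration only — the official, parallel benchmark code is available from the HPCC site, and the function names below are ours — the inner loops of the last two reduce to serial C kernels along these lines:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* STREAM Triad: sustainable memory bandwidth, a[i] = b[i] + q*c[i]. */
    static void stream_triad(double *a, const double *b, const double *c,
                             double q, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = b[i] + q * c[i];
    }

    /* RandomAccess (GUPS): XOR pseudo-random values into random table
       locations, stressing irregular memory access.  The update rule is
       modeled on the benchmark's 64-bit shift-register random stream;
       table_size must be a power of two. */
    #define POLY 0x0000000000000007ULL

    static void random_access(uint64_t *table, uint64_t table_size,
                              uint64_t n_updates)
    {
        uint64_t ran = 1;
        for (uint64_t i = 0; i < n_updates; i++) {
            ran = (ran << 1) ^ (((int64_t)ran < 0) ? POLY : 0);
            table[ran & (table_size - 1)] ^= ran;
        }
    }

    int main(void)
    {
        size_t n = 1u << 20;  /* tiny by HPCC standards; illustration only */
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        double *c = malloc(n * sizeof *c);
        uint64_t *table = calloc(n, sizeof *table);
        if (!a || !b || !c || !table) return 1;

        for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }
        stream_triad(a, b, c, 3.0, n);
        random_access(table, n, 4 * (uint64_t)n);

        printf("triad check: a[0] = %g\n", a[0]);
        free(a); free(b); free(c); free(table);
        return 0;
    }

The “EP” in EP STREAM stands for embarrassingly parallel: the kernel is timed independently on every process with no communication, whereas Global RandomAccess and Global FFT stress the interconnect as well as local memory, which is part of what makes good numbers hard to achieve.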

For the HPCC awards, there will be two classes:

Class 1: Best Performance (Four awards: $500 each)

Best performance on a base or optimized run submitted to the HPC Challenge web site. The benchmarks to be judged are Global HPL, Global RandomAccess, EP STREAM (Triad) per system and Global FFT. A $500 prize plus a certificate will be awarded for the best result on each.

Class 2: Most Productivity (One award: $1,500; may be split)

Most “elegant” implementation of two or more of the HPC Challenge benchmarks, with special emphasis placed on Global HPL, Global RandomAccess, EP STREAM (Triad) per system and Global FFT. This award will be weighted 50 percent on performance and 50 percent on code elegance, clarity and size, all as determined by an evaluation committee. Implementers must submit their entries to hpcc-awards@cs.utk.edu by Oct. 15, 2005, including a short description of:

  • the implementation;
  • the performance achieved;
  • lines of code;
  • and the actual source code of the implementation.

The evaluation committee will select a set of finalists, who will be invited to give short presentations at the HPC Challenge Award BOF at SC|05. These presentations will be judged by the evaluation committee to select the winner. The prize, which consists of $1,500 plus a certificate, may be split among the “best” entries.

The Class 1 awards are decided on benchmark results alone and should be clear-cut. Benchmark results will be accepted up to the last moment.

The Class 2 award is more subjective and will work as follows: the early-bird deadline for entries seeking feedback is Oct. 1. Feedback will be provided by Oct. 7 so that submissions can be improved; this step is meant to help entrants comply with the rules. Interested parties get only one early-bird submission (the process is not iterative). The final deadline is Oct. 15.

The awards committee will choose three finalists, each of whom must make a presentation at the SC|05 session. The winner will be chosen at the session, and the prize will be awarded there.

For more information or questions about the HPC Challenge Awards, contact hpcc-awards@cs.utk.edu.

The awards committee consists of David Bailey, LBNL NERSC; Jack Dongarra (co-chair), University of Tennessee/ORNL; Jeremy Kepner (co-chair), MIT Lincoln Lab; David Koester, MITRE; Bob Lucas, ISI; Rusty Lusk, Argonne National Lab; Piotr Luszczek, University of Tennessee; John McCalpin, IBM Austin; Rolf Rabenseifner, HLRS, Stuttgart; and Daisuke Takahashi, University of Tsukuba.

The HPC Challenge Benchmark is supported by DARPA, DOE, NSF and HPCwire.