Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

November 18, 2013

LINPACK Creator Sheds Light on Emerging HPC Benchmark

Nicole Hemsoth

Back in June during the International Supercomputing Conference (ISC), we spoke with Dr. Jack Dongarra, creator of the LINPACK benchmark, about the need for a potential alternative to that sturdy yardstick by which supercomputing might is measured.

At that time, he described a new benchmarking effort taking shape with the input of several collaborators, called the high performance conjugate gradient (HPCG) benchmark. The news drew a great deal of positive reaction, particularly from the scientific computing community, as HPCG is more in tune with the types of modern and future simulations actually running on the LINPACK-ranked systems of the Top500. The new benchmark will be announced in further detail tomorrow (Tuesday) during the Top500 announcement and will be made available for testing across a wider array of systems.

Dongarra says that while a few systems have reported early numbers using the “alpha” version of this benchmark, it’s not time to think about replacing LINPACK just yet. HPCG will undergo many tweaks over the next couple of years before it’s ready for primetime. It has already been distributed in its early form to the vendor community for testing and comment, which has yielded a number of valuable insights about further alterations, and it will be put to the test on a wider set of systems once the code is opened to more users after SC13.

But the community needs to start somewhere, especially when it comes to kickstarting the move away from a benchmark that emphasizes the floating point capabilities that were a key factor in systems 20 years ago. LINPACK, which began its evolution in the late 1970s, does measure how well the CPU can do floating point arithmetic, but it doesn’t really capture what’s happening with the rest of the machine, especially the parts under the most pressure during data movement, most notably the interconnect network.

Interestingly though, the performance reported by such a benchmark differs quite dramatically from LINPACK results. Dongarra said they’re seeing around a factor of 40 or 50 between performance on the LINPACK benchmark and performance on the HPCG benchmark, and that result isn’t quite the same sexy number we see with the Top500 and its flops-centric approach.

As Dongarra told us, when it comes to benchmarking according to real application needs, “It’s not only about the CPU, but it’s also very much about the interconnect; you might be able to do floating point rapidly, but if you don’t have the ability to move data quickly, that’s going to show up in terms of the performance that we see. You have to remember that LINPACK came about in the late 1970s, and then floating point performance was a critical thing on those CPUs and was one of the more expensive elements. But today our machines are overprovisioned for doing floating point arithmetic; it’s a fast operation and represents a relatively smaller amount of the overall time to do some of these simulations. It’s the data and communication, hence the focus.”

Dongarra explains that the lower number is to be expected, because HPCG is more reflective of the kinds of operations seen in typical simulation problems today. For instance, think of a simulation modeled by a partial differential equation whose solution requires solving a sparse matrix problem. With the new benchmark, that sparse matrix problem is the focus. This is quite different from the problem used in the original LINPACK, which dealt with dense matrix problems.
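To make the contrast concrete, below is a minimal sketch of the kind of sparse iterative solve the new benchmark is named for. This is the textbook conjugate gradient method applied to a small system standing in for a PDE discretization; it is an illustration of the technique, not the HPCG code itself, and the matrix and tolerances here are invented for the example.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Textbook conjugate gradient for a symmetric positive-definite A.

    Each iteration is dominated by a matrix-vector product and a few
    vector reductions -- memory- and communication-bound work on a
    sparse matrix, unlike LINPACK's dense, flop-heavy factorization.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# A small SPD system standing in for the sparse matrix a PDE
# discretization would produce (a 1-D Laplacian stencil).
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))
```

Note that almost nothing in the loop resembles dense matrix work: in a real sparse setting the `A @ p` product touches only a handful of nonzeros per row, so memory bandwidth and (on a cluster) interconnect latency for the reductions dominate, which is exactly the behavior Dongarra says LINPACK fails to capture.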

Right now Dongarra, his colleague Dr. Michael Heroux, and others are working to iron out major kinks in the benchmark based on early test results. These include some of the most frequently cited critical components, such as the preconditioner. But all of this is par for the course, says Dongarra.

“For what it’s worth, LINPACK evolved over many years, so there were adjustments over many years to get to today, where we have something with a historical basis we can point to and feel good about. We even still adjust things in LINPACK. For instance, with one of the criteria used in the original LINPACK benchmark to see if the correct answer is achieved, we’re noticing that number isn’t as good as it should be when we have very large matrices, so we’re making an adjustment there to get it right.”

“I would say it more accurately reflects the applications we do today and tomorrow, whereas LINPACK represents the applications of the 1980s,” concluded the LINPACK pioneer. “One of the concerns here is that manufacturers want to get a good Top500 rating, so they look at their architecture and focus architectural features on LINPACK; then they’re not going to get good performance on today’s large-scale simulation problems. That’s one of the reasons we’re doing this, to refocus or refactor how we design machines to deal with today’s problems.”

For those interested and present in Denver for SC13, you can hear more during Tuesday’s BoF presentation on the LINPACK results and the future of this benchmark.