Over the last few years in particular, discussions about the limitations of the Top500's current yardstick, the LINPACK benchmark, have grown louder. These lines of questioning have led to a new benchmarking possibility, one that takes real-world application needs and current architectural and system design trends into greater account.
LINPACK creator Jack Dongarra, together with Michael Heroux at Sandia National Laboratories, has addressed supercomputing benchmark limitations with the new HPCG benchmark, which we've detailed a number of times. Now that the benchmark is in full swing, with vendor and community action tied around it, new questions have emerged about its viability, reliability, and progress to date. There has been some confusion around it, says Dongarra, who took time to explain some of the issues as the new year gets underway.
As it stands, HPCG benchmark results have been reported from about a dozen leadership-class systems, and there has been a significant amount of vendor and community involvement to continue refining the benchmark. However, Dongarra says he left SC13 with the impression that there was some confusion about the status of HPCG after several people approached him to ask where the new list could be found. For the record, there is no fresh list of pure HPCG results, and we should not expect one soon.
He stressed that HPCG is an evolving effort and that an increased push will be underway over the next several months before the new LINPACK results are in. The idea that hundreds of machines will have results co-listed on the next Top500 is not correct; again, only a relatively small number of systems have reported their results, though he expects growth over a significant period of time. The current goal is for several machines to report both LINPACK and HPCG results on successive lists, "but it will be a number of years before we have both results for hundreds of machines," he said.
“We’re learning as we’re running it and we’re adjusting how it’s presented,” Dongarra explained. “Today, in some sense, we have a beta version and it’s going to be refined over the next six months and at that point we’ll have something we’re happier with and that users are going to be happy with.” He detailed how the vendor and user communities have given valuable feedback and encouragement, but noted that the biggest challenge (beyond the general misunderstanding about how widespread results might be) is a lack of understanding about the benchmark and its potential misuse.
Dongarra explained that the benchmark probes a number of features of a system, many more than the current LINPACK standard does. To be effective, however, just as with LINPACK, there are necessary optimizations that require a comprehensive understanding of both the benchmark and its rules as well as the underlying system architecture. “If you just take the reference implementation for LINPACK and run that, you wouldn’t get the high performance and it wouldn’t measure what we intend to measure.” Optimizations are essential; some critics of HPCG (notably those who say it is just like the STREAM benchmark) have not optimized appropriately, argues Dongarra. He notes that running LINPACK according to its base reference implementation would also produce a STREAM-like result. The meat of the results is stuck to the bones of those critical first steps.
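To see why an unoptimized run looks STREAM-like, consider the kind of kernel at the heart of HPCG's conjugate gradient iterations: a sparse matrix-vector product. The sketch below is illustrative only, not the official HPCG code; it shows a naive compressed-sparse-row (CSR) multiply in which each matrix entry is read once and used for a single multiply-add, so the loop's speed is set by memory bandwidth rather than arithmetic unless it is restructured for the target architecture.

```python
def csr_spmv(row_ptr, col_idx, vals, x):
    """Compute y = A*x for a matrix A stored in CSR format.

    row_ptr[i]..row_ptr[i+1] delimits the nonzeros of row i;
    col_idx and vals hold their column positions and values.
    """
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        acc = 0.0
        # One multiply-add per nonzero: ~2 flops for every matrix
        # entry streamed from memory, hence bandwidth-bound behavior.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += vals[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Tiny 3x3 tridiagonal example: A = [[2,-1,0], [-1,2,-1], [0,-1,2]]
row_ptr = [0, 2, 5, 7]
col_idx = [0, 1, 0, 1, 2, 1, 2]
vals = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # [1.0, 0.0, 1.0]
```

Vendor-tuned versions of kernels like this (reordering, blocking, vectorization, and so on) are exactly the kind of permitted optimization Dongarra describes, and they are what separate a meaningful HPCG number from a raw reference run.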
Optimization, as alluded to previously, is not necessarily a simple task, since certain parts of the benchmark need to be tuned and rewritten for a given machine or architecture. Dongarra and his team have published reports that describe how this is done and where the critical parts of the benchmark lie, but success in obtaining results from HPCG depends on knowledge of both the hardware and the benchmark. LINPACK, by comparison, measures a smaller set of factors; HPCG has more components that will ultimately expose more about the architecture, hence the added effort. Both of these things take time, just as LINPACK took a great deal of time to mature and build up a bank of knowledge.
According to Michael Heroux, a co-author of the HPCG benchmark, every major vendor except AMD has paid close attention to the new measurement and offered substantial feedback. Dongarra and others have been steadily working with the vendor community to help them understand what to expect, what the rules are, and how to optimize, in addition to rounds of post-optimization feedback on bugs and other problems.
Several vendors across the HPC spectrum have plenty of reason to support the HPCG effort, as they know it will provide another point of view on how their machines perform in more realistic computing environments. They want their hardware to shine in this light, and they know that the community, which has been vocal about the real-world performance limitations of LINPACK numbers, will look to these companion results for a more thorough review of actual system performance.
Of course, this raises the question of how one might creatively "game the system" with this benchmark. Heroux claims that they are onto certain tricks, and those are explicitly banned. There are other, subtler techniques they are still exploring to keep the benchmark open, which will be ruled in or out as HPCG progresses.
In addition to offering a more comprehensive view of real-world application performance, Heroux says that, ultimately, another benefit of the benchmark is a better understanding, gained from vendors and early HPCG runners, of system deficiencies that will be harder to hide. From network performance to gaps in vendors' sparse libraries, HPCG will continue to highlight the need for diverse improvements.
In sum, as Dongarra said, “after six months we want to make sure people understand what the benchmark is trying to do, let them know it requires a certain amount of effort to implement and see the benefits, and help them understand that we have made changes as a result of input coming from the community. We want to encourage further input from the community so it can represent a true community effort that better reflects the kinds of things we do on high end systems.”