One of the most heated debates in supercomputing circles over the last several years has centered on the continued validity of the high performance LINPACK (HPL) benchmark, the current basis of the Top 500 ranking of the fastest systems in the world.
In response to these criticisms, one of the creators of the original LINPACK benchmark, Dr. Jack Dongarra (Oak Ridge National Lab, University of Tennessee), and a small team have worked diligently to address the pain point many centers felt most keenly with HPL: it did not adequately measure real application performance, which consistently falls far short of the big peak and theoretical peak figures. While we have written about the value of both LINPACK and the new benchmark here and here, the fact is, the new HPCG effort is finally getting some legs, and the newest results are in.
As we are careful to note each time HPCG comes up, this new measurement is not meant to replace the Top 500 rankings. Rather, the two are complementary, with each serving as a "bookend" between which actual application performance can be found. With that said, as a counterbalance to the performance numbers that everyone, from vendors to the wider world, recognizes as the gold standard for supercomputing, its time has come; well, almost. There is still work to be done, says Dongarra, even if they are starting to see momentum at big system sites.
As you can see in the chart below, the results for the 25 total submissions for this November's list are less dramatic performance-wise. However, as you can read here, this benchmark is committed to understanding how large systems handle the strain of actual application performance. This might mean less "sexy" numbers to share with the world (at least on their own), but for those with top systems who want to prove their massive machines aren't just one-trick ponies, it's critical to have some balance. And of course, it's essential to the way that future machines are evaluated and procured.
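For context, HPCG stands for High Performance Conjugate Gradients: where HPL times a dense matrix factorization, HPCG exercises the sparse, memory-bound operations that dominate many real scientific codes. The toy Python sketch below is our own illustration, not the benchmark's actual code (the real thing layers a multigrid preconditioner over a 3D 27-point stencil problem distributed across the whole machine), but it shows the basic conjugate gradient loop and why the workload stresses memory bandwidth rather than raw floating point throughput.

```python
# A minimal sketch of the kind of kernel HPCG stresses: a conjugate
# gradient iteration dominated by sparse matrix-vector products,
# which have low flops-per-byte and are bound by memory bandwidth.
import numpy as np
import scipy.sparse as sp

def conjugate_gradient(A, b, tol=1e-8, max_iters=500):
    """Solve A x = b for a symmetric positive definite sparse A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iters):
        Ap = A @ p         # SpMV: irregular memory access, few flops per byte
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy problem: a 1D Poisson stencil (the real benchmark uses a 3D
# 27-point stencil, plus a multigrid preconditioner).
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
```

Even on a single node, a loop like this spends most of its time waiting on memory, which is exactly why systems tuned to win on dense, compute-bound HPL can look very different on HPCG.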
Before we get ahead of ourselves, it's worth sharing the results of the submissions, which, as you can see (hopefully; click for a larger image), are a bit different from the Top 500. To keep things readable, take a look here for this year's Top 500 rankings. The big changes at the top between the two lists will stand out to anyone familiar with them. Most notably, the K Computer in Japan jumped from #4 on the Top 500 to #2 with HPCG, and the Titan system at Oak Ridge dropped to #3.
One thing to keep in mind here is that, again, there are only 25 entries for this benchmark run. Dongarra expects this to grow, especially since the number of submissions has doubled since last November. However, it's a slow climb for now, especially as centers and vendors alike get a handle on the optimization process for their machines. His team will be publishing a report in the coming months that compiles best practices and notes on how the top entrants, as well as companies like NVIDIA and Intel, implemented their different approaches.
On that note, take a look at the results and notice the red markings, which indicate the presence of GPUs or coprocessors. If you start making a few quick connections, you'll see that the GPU-accelerated systems that tend to shine on LINPACK really don't pull the same weight on this real-world, application-oriented benchmark. As Dongarra said, "It's not unlike HPL, where GPUs and coprocessors have a lower achievable percent of peak because it's harder to extract performance from these. It's not just programming ease either, it's about the interconnect. When that problem goes away it will change the game dramatically."
Of course, this is not a message of doom and gloom when it comes to GPUs and coprocessors on these large machines, at least in the future, since a great deal of the interconnect problem will be a thing of the past once data never has to leave its chip home. In the meantime, it's important for this benchmark, as well as the codes and systems, to get traction ahead of the new generation of processors, which will kick off with the newest Knights family processors and with work by the OpenPOWER Foundation and NVIDIA to nix the hop and keep data movement on the die.
But enough about what doesn't work for ranking high on this benchmark; beyond the accelerator story, there are some rather odd exceptions to the rule that applies to most of the Top 500. For instance, check out one of the most telling numbers on the benchmark: the obtained percentage of peak performance, on the far right.
At first, these numbers might seem abysmal overall, until one realizes how standard this gap between theoretical peak and delivered performance is for these systems. While we could easily pick apart in another article how this is even further reason for some centers' refusal to even run LINPACK in the future, the percentage speaks for itself. Still, when you scan the list, a couple of things will stand out, both of which are architectural in nature.
First, the K Computer is a standout, taking just over 4% of this peak slice. It's not what one would call a traditional architecture, given that it's based on Fujitsu's own SPARC64 VIIIfx processor and environment, but the performance on actual applications, which was always the story behind that processor, is rather remarkable. However, not quite as remarkable as a…vector architecture?
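The arithmetic behind that column is simple: the measured HPCG result divided by the machine's theoretical peak (Rpeak). A quick sketch using approximate published figures for the K Computer, roughly 0.46 petaflops on HPCG against a roughly 11.3 petaflops Rpeak:

```python
# Fraction of peak = measured HPCG result / theoretical peak (Rpeak).
# The figures below are approximate published numbers for the K Computer.
hpcg_pflops = 0.46    # measured HPCG performance, Pflop/s (approximate)
rpeak_pflops = 11.3   # theoretical peak, Pflop/s (approximate)
print(f"{100 * hpcg_pflops / rpeak_pflops:.1f}% of peak")  # ~4.1% of peak
```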
It would appear so. Take a look near the end of the list at the NEC machine at Tohoku, which grabs over 10% of the peak performance pie. While Dongarra says this doesn't signal a resurgence of Cray-style vector machines flooding into the fold, what it does show is that the vector approach is still valid, and remarkable in its balance. The performance numbers on either list are worth noting given that the machine hits them with just a little over 2,000 cores.
Other systems noteworthy for their percentage of obtained peak also fall outside the box architecturally. For instance, the Edison machine is hitting decent numbers at 3.1%, showing that this path, like those of BlueGene and the K Computer, works well for this measurement. While one should keep in mind this is based on just 25 systems, take a look at the architectural breakdown. It's not necessarily the standard GPU/coprocessor paradigm topping things out; quite the opposite. In fact, the much easier to program option, the CPU-only approach, does quite well, even if its Top 500/LINPACK kin deliver a more powerful numbers punch. And the custom architectural bent tells a story about what seems to work well for actual application performance, at least in terms of how this benchmark generalizes those applications.
At the end of the day, it's all about achieving a balanced architecture. When that is the goal, this companion benchmark can reward the efforts to build real machines for real science. That, and naturally, a great deal of optimization work. It is taking many man-hours to optimize for HPCG, a limiting factor in how many centers are taking on the task. However, with the DOE backing Dongarra and team's efforts on this more real-world measurement, one can expect others to follow the first twenty-five. The publication of the team's observations, along with detailed stories and the technical processes behind optimization, will likely help.
More details about the benchmark, along with a way to keep tabs on the new publications coming out of these optimization efforts, can be found here: http://www.hpcg-benchmark.org/