At an HPC meetup event in San Francisco on Feb 10, Berkeley Lab Deputy Director Horst Simon makes the case that Moore’s law and parallelism can no longer be counted on to provide the exponential growth that has been driving high-performance computing for six decades.
If indeed Moore’s law is coming to an end, there will be a need for new architectures and new technologies, he says, citing examples from the post-CMOS space, or even non-von Neumann options like quantum computing and neuromorphic (brain-like) computing.
Horst presents measurable evidence for his claim that Moore’s law is running out of steam, drawing on TOP500 metrics as well as other data.
Looking at a slide of projected performance development based on TOP500 data, one might conclude that an exaflops system is on track for 2020, says Horst, but that would be a mistake.
“Even if you don’t know anything about high-performance computing at all, you should be very much concerned about [making these assumptions],” he adds, “because what you are doing here is extrapolations on a semi-log scale. And whenever you’re dealing with exponential growing data, very small perturbations in the beginning can give you a big variation in the end.”
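Horst’s point about semi-log extrapolation can be made concrete with a toy calculation (the growth rates and starting figure below are illustrative, not from his talk): projecting performance forward means compounding an annual growth factor, so a seemingly small change in that factor opens a large gap years later.

```python
# Illustrative sketch of extrapolation sensitivity on a semi-log scale.
# All numbers are hypothetical, chosen only to show the compounding effect.

def extrapolate(start_pflops, annual_factor, years):
    """Project performance forward by compounding an annual growth factor."""
    return start_pflops * annual_factor ** years

# Suppose a 10-petaflops system in 2013, projected out 7 years to 2020.
fast = extrapolate(10, 1.9, 7)   # historical-style ~1.9x/year growth
slow = extrapolate(10, 1.4, 7)   # a modestly slower 1.4x/year

print(f"1.9x/year for 7 years: {fast:.0f} PF")   # ~894 PF, near exascale
print(f"1.4x/year for 7 years: {slow:.0f} PF")   # ~105 PF, roughly 9x short
```

On a semi-log plot both trajectories are straight lines whose slopes differ only slightly, yet after seven years of compounding they land nearly an order of magnitude apart, which is exactly why Horst warns against reading an exascale date off the chart.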
Horst goes on to identify two such perturbations. Zooming in on the graph, especially the line representing the summed performance of the entire list over time, one can see that in June 2008 something caused the slope of the curve to flatten. A similar break point appears in June 2013.
This leads Horst to conclude that this five-year span marks a turning point, one where the growth attributed to Moore’s law and parallelism is no longer there. The case is further supported by the lack of turnover in the top ten machines, a grouping that has remained virtually unchanged for two years.
Even the CORAL announcement, the joint Collaboration of Oak Ridge, Argonne, and Lawrence Livermore, which is noteworthy for funding three 150-petaflops systems, can be read as a marker of the slowdown. Horst says these machines would already have to be deployed to keep the US on track for a 2020 exascale timeline, yet they are still two to three years away.
Horst goes on to address some of the possible reasons for the stagnation, including a lack of investment stemming from the worldwide recession and a lack of engagement by key vendors. There are also the steep technical challenges associated with exascale, such as overcoming data bottlenecks and power constraints. But in the end, Horst maintains that the slowdown in performance growth is primarily due to the limits of Moore’s law.
It’s not all doom and gloom, however, as HPC is currently thriving around the world as a driver of innovation. High-level supercomputing is no longer the purview of one or two nations, and the first country to field an exascale system will be the one that makes the most targeted funding commitment, provided it acts soon.
As Horst puts it: “the only thing standing between us and an exascale machine is a lot of money – billions of dollars of investment and maybe a power bill of $50-100 million a year.”