Last month, the President’s Council of Advisors on Science and Technology (PCAST) — 20 of the nation’s leading scientists and engineers selected by the President — released a report entitled “Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology.” The council argues that networking and information technology is a key enabler of economic competitiveness, national security and quality of life, and therefore should be appropriately funded. Don’t let the humdrum summary fool you: there are revolutionary ideas afoot in this report, and William Gropp, professor of computer science at the University of Illinois, provides a rundown on those relevant to high performance computing. Gropp has a pretty good understanding of the material, since he was also part of the team that authored the report.
One of the key claims in the report is that the TOP500 list alone is not a sufficient indicator of HPC prowess.
While the HPC community has long known that no single benchmark adequately captures the usefulness of a system, the PCAST report explicitly calls for a greater focus on what I’ll call sustained performance: the ability to compute effectively on a wide range of problems:
“But the goal of our investment in HPC should be to solve computational problems that address our current national priorities.”
Addressing this is becoming critical, because developing systems designed solely to rank at the top of the TOP500 list will not provide the computational tools needed for productive science and engineering research.
Gropp asserts that the business-as-usual approach to high-end computing will no longer be effective, and that for HPC to continue to advance, a dramatic revamping will be required in all parts of the ecosystem: the hardware, software and algorithms. If this overhaul fails to happen, Gropp opines that the end of Moore’s Law, and the relatively painless progress that goes with it, may really be at hand.
To avoid this fate, the report calls for “substantial and sustained” investment in a broad range of basic research for HPC, specifically:
“To lay the groundwork for such systems, we will need to undertake a substantial and sustained program of fundamental research on hardware, architectures, algorithms and software with the potential for enabling game-changing advances in high-performance computing.”
Gropp concludes his analysis with a sobering glimpse into the future of HPC:
Without a sustained investment in basic research into HPC, the historic increase in performance of HPC systems will slow down and eventually end. With such an investment, HPC will continue to provide scientists and engineers with the ability to solve the myriad challenges that we face.
It’s easy to dismiss Gropp’s prediction as doom-and-gloom rhetoric, understandably intended to galvanize resources, but in a way I think he’s right. I don’t think anyone wants to see HPC’s demise, but the likely scenario is that we will carry on doing business as usual, making incremental changes and tradeoffs and avoiding the really hard challenges until absolutely forced to do otherwise. I don’t think we’ll see really big changes unless we hit the rock bottom of stalled performance. Or unless HPC experiences a game-changing breakthrough that recasts the trajectory of its progress. These types of scientific leaps can’t be predicted, but increased support at the federal level increases their likelihood.