Thanks to the globalization of technology and, more generally, of economic opportunity, the pursuit of exascale computing is taking place on a much more level international playing field than was the case for petascale computing. As a result, it is not a foregone conclusion that the US will lead the way. In a Computerworld interview, Argonne National Lab’s Peter Beckman talks about the race to exascale computing and some of the challenges it entails.
According to Beckman, the DOE is going to deliver a report to Congress on February 10 that spells out a plan to develop exascale computing capability in the 2019-2020 timeframe. The narrower goal is to field an exaflop supercomputer that runs inside a 20MW power budget. From Beckman’s perspective (and that of many others in the HPC community), the 20MW figure is the sticking point. He says that if it were relaxed to 40MW or 50MW, the task would be a good deal easier.
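To see why the power budget is the sticking point, it helps to translate it into the energy efficiency a machine would have to achieve. The short sketch below uses only the figures from the article (1 exaflop; 20MW vs. 40MW or 50MW budgets):

```python
# Back-of-envelope: energy efficiency implied by the DOE exascale targets.
# Figures taken from the article: 1 exaflop, with power budgets of
# 20 MW (the DOE target) versus 40-50 MW (what Beckman calls easier).

EXAFLOP = 1e18  # floating-point operations per second

for budget_mw in (20, 40, 50):
    watts = budget_mw * 1e6
    gflops_per_watt = EXAFLOP / watts / 1e9
    print(f"{budget_mw} MW budget -> {gflops_per_watt:.0f} GFLOPS/W required")
```

At 20MW the machine must deliver 50 GFLOPS per watt; relaxing the budget to 50MW cuts the required efficiency to 20 GFLOPS per watt, which is why a looser budget makes the engineering problem so much more tractable.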
The article reports that 22 as-yet-unnamed vendors have responded to the DOE request for information (RFI) to develop an exascale system in that timeframe, with the idea of receiving significant government support to meet the deadline. Besides building the hardware, there are also the challenges of developing a new software stack and dealing with the data. From Beckman’s point of view, these are the more serious challenges. On the data challenge, he says:
If we imagine that we have a machine that is an exascale, exaflop machine, generating petabytes and petabytes of data, it becomes its own, in some sense, computation problem. We can’t solve the bandwidth storage problem by just buying more disks. A multi-level plan is what will have to evolve, including NVRAM and even novel technologies such as phase change memory. But there has to be a comprehensive data solution that includes analysis. It can’t be, ‘Oh, we just need to be able to store the data.’ We need to look at the architecture necessary to analyze the data.
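Beckman’s point that you “can’t solve the bandwidth storage problem by just buying more disks” can be made concrete with a rough calculation. The numbers below (checkpoint size, write window, per-disk bandwidth) are hypothetical round figures chosen for illustration, not from the article:

```python
# Illustrative only: why disk count explodes at exascale I/O rates.
# All three parameters are hypothetical assumptions, not from the article.

checkpoint_bytes = 10e15   # assume a 10 PB checkpoint
window_s = 600             # assume a 10-minute window to write it out
per_disk_bw = 150e6        # assume 150 MB/s sustained per disk

aggregate_bw = checkpoint_bytes / window_s      # bytes/s the system must absorb
disks = aggregate_bw / per_disk_bw              # disks at full streaming rate

print(f"aggregate bandwidth needed: {aggregate_bw / 1e9:,.0f} GB/s")
print(f"disks needed (best case, pure streaming): {disks:,.0f}")
```

Under these assumptions the system needs on the order of 100,000 disks running flat out just to absorb one checkpoint, before any analysis happens, which is why Beckman argues for a multi-level hierarchy (NVRAM, phase change memory) and an architecture designed around analyzing the data, not merely storing it.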
Whether the US Congress will fund any forthcoming DOE plan is still up in the air. And given the fiscally conservative political climate in the country right now, it’s certainly within the realm of possibility that the feds will pass on this. Undoubtedly, the exascale proponents will point out the utility of the technology to help solve pressing societal issues — climate change, national security, energy independence, industrial competitiveness, and so on — and the relatively small investment that will be required to fund the effort.
Given that Europe and China are likely to devote considerable resources to developing their own exascale computing capability, the US should feel compelled to at least keep pace. The Chinese, in particular, have stated their desire to be the first to field an exaflop machine before the end of the decade, and have recently demonstrated a willingness to pursue this with indigenous technology. Although China is behind both Europe and the US in HPC infrastructure and expertise today, it is quickly catching up. According to IDC, in a few years China’s investment will be on par with Europe’s. And in the next five years, the Asian nation will build 17 petascale supercomputing centers.
But if the US decides to fund exascale development at a level commensurate with its economic prowess, and uses its home-field advantage with regard to HPC vendors and expertise, there is little doubt the country could lead the world in this area. It just needs the political will to do so.