Efforts to field the first exascale supercomputer are underway in China, Japan, the US, and the EU. Not only would this achievement provide a powerful tool for science and industry in the winning nation, but a substantial symbolic victory also hangs in the balance. Over at Intel’s IT Peer Network blog, Victor Na, an architecture manager with Intel, writes about this global race and offers his perspective on who is “competing ahead of the exascale pack.”
In terms of LINPACK performance, China is clearly outpacing the competition. Its Tianhe-2 supercomputer is nearly twice as fast as its closest rival, Titan, installed at Oak Ridge National Laboratory. This is just one point in Na’s contention that Asia is currently leading the race to exascale. He also points out that the APAC region claims more than one-quarter of the world’s TOP500 systems. HPC revenue is expected to reach $14 billion by 2016, according to IDC figures, and Na asserts that Asia is well-placed to capture a significant share of that market.
Considering the first exascale machine is not expected for another 7-9 years, the outcome of this race is by no means assured. Many countries and economic blocs have stepped up their investments. The US, which has been slow to fund exascale development, currently has a billion-dollar proposal awaiting Congressional action. In light of this competition, Na addresses some of the shortcomings Asia would need to remedy to better its odds of being first to break the next big speed barrier.
His main point is the necessity for a detailed roadmap with clear objectives.
Na writes: “First and foremost, Asian developers must map a true path to exascale success, which I’m sure you can imagine is no easy feat. How can developers anticipate future needs, build platforms that are able to integrate technologies which in some cases may not even have been developed yet, and account for market changes? It seems an impossible task, but if Asia wants to stay at the cutting edge it needs to be able to answer these questions and many more.”
It’s also important to understand the true depth of the problem. Getting to exascale will require technological breakthroughs in several key areas.
“An ability to increase processor performance is critical,” writes Na, “which means developers must constantly look at ways to enhance memory technology, interconnect and integrate new functions into the processor, reduce power consumption, identify innovative cooling techniques, and identify new technologies delivering increased flexibility to software developers.”
As for Intel’s part, the chipmaker is continuing its commitment to Moore’s Law, according to Na, by working to advance memory and processor capabilities with an eye toward alleviating the memory wall. An upcoming version of the Intel Xeon Phi processor will offer on-package memory, and the company’s research division is investigating new technologies like memory stacked on memory.
Innovative fabric solutions are also on Intel’s roadmap. Na writes: “creating a single processor with integrated fabric controller results in fewer chips and therefore fewer chip crossings. In turn this delivers lower cost and power consumption, as well as higher performance and density, which are all critical hurdles that must be overcome on the journey to exascale success.”
Power is perhaps the biggest obstacle, and one that does not yet have a clear solution. A megawatt of continuous power costs roughly $1 million a year. Built on today’s technologies, an exascale supercomputer would require at least 500 MW to run the site, or roughly half a billion dollars a year in electricity. DARPA has set a goal of fielding exascale systems in the 20 MW neighborhood, but other HPC experts argue that a 50-100 MW range is more realistic. Na reports that Intel and European researchers have established three European labs focused on designing simulations that will be energy-efficient at exascale.
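The back-of-envelope power arithmetic above can be sketched in a few lines, using the article’s rough figure of $1 million per megawatt-year (the function name here is illustrative, not from any cited source):

```python
# Rough annual electricity-cost estimate based on the article's figures.
COST_PER_MW_YEAR = 1_000_000  # ~$1M per MW of continuous draw per year

def annual_power_cost(megawatts):
    """Estimated annual electricity cost in dollars for a site drawing
    `megawatts` continuously for a full year."""
    return megawatts * COST_PER_MW_YEAR

# Exascale on today's technology: ~500 MW, i.e. about $500M per year.
print(annual_power_cost(500))  # 500000000
# DARPA's target envelope: ~20 MW, i.e. about $20M per year.
print(annual_power_cost(20))   # 20000000
```

The gap between those two numbers is the whole problem: a 25x reduction in power per exaflop is needed just to meet DARPA’s target.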
One of the recurring themes in discussions of China’s supercomputing ambitions has been that nation’s lack of application software expertise. Na steers clear of this topic, except to say that Intel and its partners are working to develop “effective parallel programming models where we work in close collaboration with academics and researchers to tackle the various issues affecting HPC.”
Offering an alternate view to Na’s position is Jack Dongarra, a professor of computer science at the University of Tennessee and the inventor of the LINPACK benchmark on which the TOP500 list is based. Dongarra recently told Computerworld that he disagrees with the notion that China has a head start.
“They are not ahead in terms of software, they are not ahead in terms of applications,” said Dongarra. But China does have the political will to invest in next-generation HPC, “where we haven’t seen that same level in the U.S. at this point.”