November 22, 2011
Thanks to the globalization of technology, and more generally, economic opportunity, the pursuit of exascale computing is taking place on a much more level international playing field than was the case for petascale computing. As a result, it is not a foregone conclusion that the US will lead the way. In a Computerworld interview, Argonne National Lab's Peter Beckman talks about the race to exascale computing and some of the challenges it entails.
According to Beckman, the DOE is going to deliver a report to Congress on February 10 that spells out a plan to develop exascale computing capability in the 2019-2020 timeframe. The narrower goal is to field an exaflop supercomputer that runs inside a 20MW power budget. From Beckman's perspective (and that of many others in the HPC community), the 20MW limit is the sticking point. He says that if it were relaxed to 40MW or 50MW, the job would be a good deal easier.
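A quick back-of-envelope calculation shows why the power budget dominates the discussion. The sketch below simply divides an exaflop by the proposed budgets; the only figures taken from the article are the exaflop target and the 20/40/50MW numbers.

```python
# Compute efficiency required for an exaflop machine under different
# power budgets (illustrative arithmetic only).

EXAFLOP = 1e18  # floating-point operations per second

for budget_mw in (20, 40, 50):
    watts = budget_mw * 1e6
    gflops_per_watt = EXAFLOP / watts / 1e9
    print(f"{budget_mw} MW budget -> {gflops_per_watt:.0f} GFLOPS/W required")

# 20 MW budget -> 50 GFLOPS/W required
# 40 MW budget -> 25 GFLOPS/W required
# 50 MW budget -> 20 GFLOPS/W required
```

For comparison, the most energy-efficient systems of late 2011 delivered on the order of 2 GFLOPS per watt, which is why relaxing the budget to 40MW or 50MW makes the target noticeably less daunting.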
The article reports that 22 as yet unnamed vendors have responded to the DOE request for information (RFI) to develop an exascale system in that timeframe, with the idea of receiving significant government support to meet the deadline. Beyond building the hardware, there are also the challenges of developing a new software stack and dealing with the data issues. From Beckman's point of view, these are the more serious challenges. On the data challenge, he says:
If we imagine that we have a machine that is an exascale, exaflop machine, generating petabytes and petabytes of data, it becomes its own, in some sense, computation problem. We can't solve the bandwidth storage problem by just buying more disks. A multi-level plan is what will have to evolve, including NVRAM and even novel technologies such as phase change memory. But there has to be a comprehensive data solution that includes analysis. It can't be, 'Oh, we just need to be able to store the data.' We need to look at the architecture necessary to analyze the data.
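A similar rough calculation illustrates why "just buying more disks" doesn't address the bandwidth side of the problem. The numbers below are assumptions chosen for illustration, not figures from the interview: a hypothetical 10PB memory image, a five-minute checkpoint window, and roughly 150MB/s of sustained bandwidth per spinning disk.

```python
# Rough estimate of how many disks would be needed purely for bandwidth
# if an exascale machine checkpoints a 10 PB memory image in 5 minutes.
# All inputs are illustrative assumptions.

checkpoint_bytes = 10e15          # assumed 10 PB memory image
checkpoint_seconds = 5 * 60       # assumed 5-minute checkpoint window
disk_bw_bytes_per_s = 150e6       # assumed ~150 MB/s sustained per disk

required_bw = checkpoint_bytes / checkpoint_seconds
disks_needed = required_bw / disk_bw_bytes_per_s
print(f"Aggregate bandwidth needed: {required_bw / 1e12:.1f} TB/s")
print(f"Disks needed for bandwidth alone: {disks_needed:,.0f}")

# Aggregate bandwidth needed: 33.3 TB/s
# Disks needed for bandwidth alone: 222,222
```

Hundreds of thousands of drives purchased for bandwidth rather than capacity is the kind of economics that pushes designers toward intermediate tiers built from NVRAM or phase-change memory, sitting between main memory and disk.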
Whether the US Congress will fund any forthcoming DOE plan is still up in the air. And given the fiscally conservative political climate in the country right now, it's certainly within the realm of possibility that the feds will pass on this. Undoubtedly, the exascale proponents will point out the utility of the technology to help solve pressing societal issues -- climate change, national security, energy independence, industrial competitiveness, and so on -- and the relatively small investment that will be required to fund the effort.
Given that Europe and China are likely to devote considerable resources to developing their own exascale computing capability, the US should feel compelled to at least keep pace. The Chinese, in particular, have stated their desire to be first to field an exaflop machine before the end of the decade, and have recently demonstrated a willingness to do so with indigenous technology. Although China is behind both Europe and the US in HPC infrastructure and expertise today, it is quickly catching up. According to IDC, in a few years China's investment will be on par with Europe's, and over the next five years the country will build 17 petascale supercomputing centers.
But if the US decides to fund exascale development at a level commensurate with its economic prowess, and uses its home field advantage in HPC vendors and expertise, there is little doubt the country could lead the world in this area. It just needs the political will to do so.