November 26, 2012
Apparently, the US Department of Energy (DOE) is revising its timetable for deploying its first exaflops-capable supercomputers. According to William Harrod, Research Division Director of the DOE's Office of Science Advanced Scientific Computing Research (ASCR) program, the agency is now looking at 2020 to 2022 to get its first exascale machines up and running. That effectively means the US is delaying its plans for this next-generation technology by two to four years.
Harrod outlined the impact of the delay at the Supercomputing Conference (SC12) last week in Salt Lake City, Utah. In an article posted today in Computerworld, Harrod described the slippage thusly: "When we started this, [the timetable was] 2018; now it's become 2020 but really it is 2022."
The DOE is in the process of writing up a proposal, known as the Exascale Computing Initiative (ECI), which is expected to be presented to Congress in February of next year. Of course, there's no guarantee that the feds will actually act on the proposal in a way that meets the agency's needs.
According to the Computerworld report, the effort is expected to cost in the neighborhood of a billion dollars over the next several years. Given the failure of the Obama White House and Congress to come to terms on budgets over the previous four years, that doesn't bode well. At best, funding for the work won't be put in place until October 2013, as part of the fiscal 2014 budget.
Although the budget stalemate that has gripped Washington for the last four years has not helped, a more fundamental problem is that it's been difficult to make the case for exascale systems. Despite Obama's 2011 State of the Union address invoking the Russian Sputnik challenge as a model for lighting a fire under US R&D, there is little public outcry for more federal spending in technology. Scientists insist that exascale machines will enable advancements in an array of fields – biology, energy, physics, material science, national security, and climate research; but such talk has not captured the public imagination to the degree that would force policymakers to act.
Unfortunately, to develop such supercomputers by the end of the decade requires actions now. While the hardware may indeed become available by 2018 – Intel, Cray and others have stated their intentions to supply such hardware in that timeframe – the software models for exascale computing haven't been developed yet and will require a long lead time.
China is also working on these systems and intends to field an exaflop-capable machine around the same time – perhaps using domestically produced technology. Governments in Japan and Europe have plans to field exascale machines around the end of the decade as well. Those nations face the same daunting challenges as the US, but if the Americans dawdle, it's not inconceivable that the first exaflop machine will be in Europe or Asia.
In fact, if the TOP500 trends are to be believed, a supercomputer that is able to execute a Linpack exaflop will appear somewhere in the world by 2019. Whether that machine becomes a platform for exascale computing or just a container for a collection of petascale and terascale applications is another matter.
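That 2019 projection can be sanity-checked with a back-of-envelope extrapolation. The figures below are assumptions for illustration, not from the article: Titan's roughly 17.6 petaflops Linpack result as the November 2012 baseline, and the historical TOP500 trend of the top system's performance doubling roughly every 14 months (~1.2 years).

```python
import math

# Hypothetical extrapolation of the TOP500 trend (assumed parameters):
# Titan's ~17.6 petaflops Linpack result as the November 2012 baseline,
# and a doubling time of roughly 14 months (~1.2 years) for the #1 system.
BASELINE_PFLOPS = 17.6
BASELINE_YEAR = 2012
DOUBLING_TIME_YEARS = 1.2

# 1 exaflop = 1000 petaflops, so count the doublings needed from the baseline
doublings_needed = math.log2(1000 / BASELINE_PFLOPS)
exaflop_year = BASELINE_YEAR + doublings_needed * DOUBLING_TIME_YEARS

print(round(exaflop_year))  # prints 2019
```

Under those assumed parameters the trend line lands on 2019, consistent with the article's projection; a slower doubling time would push the date out accordingly.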