As we reported last week, IBM backed out of the deal with NCSA to build the 10-petaflop machine on the grounds that it was no longer financially feasible. According to a report in the Champaign/Urbana News-Gazette, the National Center for Supercomputing Applications (NCSA) is already looking for a replacement for IBM's ill-fated machine. John Melchi, who heads the Administration Directorate at NCSA, said computer vendors have been contacting the center to offer their solutions.
While Melchi wouldn’t name names, he pointed out that there were originally four proposals submitted to NSF for the system back in 2007. No doubt some or all of those vendors are talking to NCSA again.
Presumably the roughly $300 million price tag for Blue Waters would still apply. The National Science Foundation (NSF) had kicked in $208 million for the project, while the University of Illinois and the state government tacked on an additional $100 million. Given IBM's apparent failure to squeeze any more money out of the parties, the next vendor will probably have to work within the same financial constraints.
According to Berkeley Lab Deputy Director Horst Simon, who is quoted in the News-Gazette article, the NSF has historically low-balled supercomputer projects, with the expectation that the vendors, their partners, or other government entities would make up the difference. There is also a certain “macho aspect” to getting a top-ranked machine on the TOP500 list, he added.
But a lot has changed since 2007, when the Blue Waters deal was originally formulated. The 2008-2009 recession re-focused the attention of HPC vendors on the bottom line, while state governments are reeling from a loss of tax revenues. (Illinois, in fact, is one of the hardest hit states, suffering its worst deficit in history.) In such an environment, prestige takes a back seat to practicality.
The question remains whether NCSA can find a vendor to come up with a 10-petaflop system for $300 million by the end of 2012. The cheapest way to get to 10 peak petaflops is with GPUs, but it's not clear if NCSA wants to go that route. The real goal of the project is to provide a system that can deliver a sustained petaflop across a range of science and engineering codes. And since GPUs don't have the same general-purpose breadth of computational capability as CPUs, NCSA might have to reformulate its approach.