Because It’s There?
I recently got a chance to talk with Ed Turkel, manager of HP’s Product and Technology Group for the HPC Division. We mainly discussed HP’s new multicore optimization program, but I was also interested in what he had to say about the company’s aspirations in high-end supercomputing. Although the company essentially matches IBM in HPC revenue, HP doesn’t have a Blue Gene type of solution for extreme supercomputing. In the latest Top500 list, HP actually had more entries (203) than any other vendor, but didn’t have a single system in the top 50.
That particular statistic is about to change. HP recently revealed that a 182-teraflop system has been purchased by an undisclosed Swedish government agency. The Swedish machine is a cluster comprising 2,148 dual-processor, ProLiant C-class Xeon-based blades. Turkel says HP is hoping to get that system on the November Top500 list, where it would almost certainly break into the top 10. Another Swedish system, this one a 60-teraflop machine for the country’s National Supercomputer Centre (NSC), should also be deployed in time for the November rankings.
According to Turkel, HP is interested in the high end of the HPC market, but only where they can leverage their enterprise HPC offerings into something bigger. “We perhaps haven’t been as willing as some of our competitors to — if you will — give away systems,” explained Turkel. “But it’s not for lack of interest in the high end. Just the opposite. We’re very interested in deploying some large systems.”
He says they have no intention of developing a proprietary architecture, like Blue Gene, for a high-end offering. But when I asked him if they were going to come out with a distinct product for high-end supercomputing, he hedged a bit, leaving the door open to the possibility.
Scaling commodity clusters into the 100-teraflop-plus realm is now feasible thanks to blade technology, multicore processors and InfiniBand interconnects. Getting to a petaflop is trickier. Sun Microsystems’ recently announced Constellation supercomputer uses a very dense blade design and a special InfiniBand switch to implement a petaflop-capable architecture. Whether commodity-based systems like this can achieve the real-world application performance of the more highly customized Cray and IBM supers remains to be seen.
But why would anyone want to chase the high end of the supercomputing market anyway? Analysts and vendors both agree that the market is small, essentially stagnant, and is dependent on the buying behavior of a limited set of customers — mostly government organizations. With the exception of Cray, companies that have focused exclusively on this market sector have either failed or were bought out. Cray itself has been swallowed and regurgitated a number of times.
Turkel said that HP’s interest in the high end of the supercomputing market is driven by the company’s strategy of using HPC as a technology incubator. The hope is that the kind of research that brought the world clusters may also come up with something else as widely applicable to the larger IT community. Potentially, that’s worth a lot.
This is the same rationale Sun used when announcing its Constellation supercomputer last month. Talking about the new offering on his blog, Sun CEO Jonathan Schwartz admitted that the high end of supercomputing is “small, esoteric, and has very small profit margins.” But, he explained, that’s not the point:
The academic supercomputing community (there’s that word again) sets the pace for enterprise computing across the world — which has grabbed on to HPC for an array of real world challenges, from virus, disease, and drug discovery, to customer purchase pattern analytics, capital markets trading, energy discovery, dynamic resource management — you name it, it’s one of the fastest growing segments in the marketplace. Proving that what starts in academia, ends up on main street.
But it works both ways. A lot of mainstream computing technology feeds back into supercomputing. And that tends to be the more typical direction of technology flow. Linux, x86 processors and Ethernet are all commodity technologies that were adopted by HPC. Even InfiniBand, which is now making its way from HPC into the enterprise, was originally developed as a general-purpose interconnect. FPGAs and GPUs may be the next examples of commodity technology that moves up the food chain.
And as for those “real world challenges” that Schwartz talks about: most of those applications run on capacity HPC clusters, not capability-class systems.
So why do these companies find the need to play at the far edge of the supercomputing market? Maybe for the same reason people climb Everest — because it’s there. Trying to explain the business case for high-end supercomputing may keep investors calm, but in truth, the motivation to feed profits doesn’t explain all vendor behavior. Sometimes all it takes is a single individual with some lofty goals and a need to succeed.
Seymour Cray wanted to build the fastest computers in the world for the joy of it. When he started Control Data Corporation in 1957, his interest was in building big scientific computers, not in making boatloads of money.
And if you’ve already made a boatload, like billionaire Andy Bechtolsheim, Sun Microsystems’ chief architect, your motivations may lie somewhere beyond capitalism. Bechtolsheim is busy pushing Sun to the rarefied heights of supercomputing with the aforementioned Constellation system. While that product may not make Sun rich, it makes the company a player in the eyes of the supercomputing community. And if that’s enough of an incentive for the folks at HP, we may yet see another company join the petaflop club.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.