Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

June 20, 2013

Cray Cracks Commercial HPC Code

Nicole Hemsoth

According to Cray CEO Peter Ungaro, business in the commercial HPC sector is heating up–delivering close to ten percent of the company's business versus just under one percent only two years ago.

Ungaro says this enterprise boost is fed by a new stream of more complex algorithms, which in turn are being stoked by the addition of new data opportunities. As he told us during our sit-down at ISC this week, "Users are moving away from their old models and are attacking more difficult computational problems. Users have applications that don't parallelize easily and require a lot of communication across the machine, and that's exactly what our systems are designed to handle."

While he admits that a large number of complex algorithms are being floated to clouds or armies of low-power, standard boxes, an estimated 10 to 20 percent of enterprise applications require sturdier guts and ultra performance. For these, Cray's XC30 "Cascade" system is proving a plausible fit–but to sustain it, the company is making other research investments aimed at the longer-term viability of its offerings.

To keep pushing the commercial barrier, the company is hopping off its hardware habit to take a crack at the softer side of certain HPC problems. At the core of this is a sustained focus on the interconnect front. Ungaro says they're spending twice as much on software, in terms of people resources, as they are on hardware. "We'll continue to differentiate our systems," Ungaro said. "And interconnects are an important part of that. We're investing there more than everywhere else, even after the Intel transaction."

He explains that as they look toward next-generation interconnects, the bottleneck that needs to be addressed is getting the interconnect up to the speed of the system–the only way to handle that is to get it closer to the processor. But since Cray won't be getting into that business anytime soon, their real efforts can focus on the software side of solving such problems.

The goal of investing in the software half is that Cray could take those developments and push them to many different interconnect technologies, adding greater flexibility and, presumably, more alternatives as the market continues to bifurcate at the high and midrange ends. Having a transferable technology that lets them pivot between specialty interconnects and boosting InfiniBand and Ethernet makes sense, especially as they continue the enterprise HPC drive.

"It's a natural movement of the technology," he told us. "A few years ago people would have said that Cray is all about vector processors. Well, we no longer do those and our business is much stronger now that we don't. I can see how in the future–say 5 years from now–people might also say Cray no longer does the hardware ASIC for an interconnect–our value will be much bigger in the marketplace."

These developments, meshed with some Cray-crafted cluster technology and fed by the Appro buy, could deliver some broader appeal–as could their relatively recent focus on big data solutions, which are being bandied about between their YarcData division and their existing customer base.

Ungaro is confident that their Hadoop and graph analytics offerings are pushing performance into a big data market that tends to have its sights set on commodity boxes. Cray recently announced its own Hadoop solution based on the Intel distro, and has added a fully integrated, ready-to-roll Hadoop box. Further, their Urika graph analytics appliance is finding a home with users in financial services, fraud detection and beyond–although Ungaro was mum on how many they've sold since the first five customers were announced last year following the testing phase.

When asked "why Hadoop," Ungaro said they've tackled the discovery side of big data with graph analytics, but until their Hadoop offering, they've not been able to address the search side. The philosophy here, he explains, is to take Hadoop and pack it around a cluster that lets users apply supercomputing technology to advanced analytics.

They've also extended their focus on both big data and supercomputing by adding some Lustre to their storage pitch. The Cray Cluster Connect, which was just announced at ISC this week, sets up a compute-agnostic storage management stable that opens up new possibilities–again, presumably for a broadening base of users that might want to plug in DDN, NetApp or other blocks.

While there aren't many big supers slated for top-ten deployments, Ungaro is thinking big about Cascade as a more widely consumable system. The company just announced that Xeon Phi can be snapped in, which could eventually lead to a new set of commercial HPC use cases in some of the expected high performance computing areas. Combined with an actual hook for all the big data fishes in what seems to be an endless sea, Cray might be pushing open some doors for HPC's trickle into the mainstream.
