Fusion-io to Follow in New Footsteps
Lately, it has seemed like there is a lot of eulogy-penning to do for the increasingly frequent consolidation stories involving companies with a foothold in HPC. It was just a couple of weeks ago that the European HPC company Bull announced it would fall under the Atos umbrella, and not long before that, IBM announced that its x86 server business would be moving over to Lenovo. And this morning brings fresh news of another acquisition, although this time even less of a strict HPC play.
In the case of today’s news that SanDisk is putting up $1.1 billion for Fusion-io, it’s a bit less of an HPC-centric surprise, although Fusion-io has invested rather significantly in making its presence felt in the supercomputing stream since its inception in 2006. We’ve watched Fusion-io closely as it secured a Series A round of $19 million, stacked another $47 million on top of that two years later, and brought highly publicized heavy hitters on board, including Steve Wozniak as Chief Scientist and investors Michael Dell, Accel Partners, Samsung, and others. The company has remained well fed by its largest customers, Apple and Facebook, but as its latest financial statements reveal, its business is quite a bit more diverse than the hyperscale giants alone. And indeed, there has been a rather nice slice of the HPC pie for the company’s ioMemory technology, but not quite enough to keep the company from leaking money for the last five consecutive quarters.
“We have increasingly seen the deployment of flash/SSD as part of HPC and Big Data environments, especially among commercial organizations, as more users find I/O performance to be the critical bottleneck for their application workloads,” says Addison Snell, CEO of Intersect360 Research. “With memory requirements also continuing to rise, SanDisk could see increasing demand across its newly expanded portfolio.”
Fusion-io has had a number of wins in the HPC market over the last several years. In 2011, just after going public, its technology backed big performance boosts for the Protein Data Bank at the University of California, San Diego and its partner center, the San Diego Supercomputer Center (SDSC). In that case, SDSC replaced its hard drives with Fusion-io flash to cut MySQL database query times from 30 minutes to 3 minutes.
SDSC was one of the early users of the company’s Fusion ioMemory technology, which was followed by adoptions at other large supercomputing sites, including Lawrence Livermore National Lab’s use of a single-node server with 12 TB of ioMemory that broke graph size records on the June 2011 Graph500 list. This system, called “Leviathan,” was based on a four-socket, 40-core Xeon 7500 configuration hooked up to nine Fusion ioDrive Duos and one ioDrive. The lab stored the graph in the direct-attached memory, with the ioDrives serving hits to the graph’s edges and offering a nice proof point for Fusion-io.
The company’s pitch for HPC, which it showcased through the Leviathan example, is that ioMemory could enhance the processing capabilities of grid environments by making each node far higher performing, thus allowing such environments to handle more demanding workloads, consume less memory, and ultimately reach higher performance. The company has had a strong latency message for the HPC camp, which it said was addressed by ioMemory’s “15-microsecond write latencies to ensure that processors in clustered environments are kept busy actively processing requests instead of managing thousands of contexts and threads.” Its counter to SSDs was that each server could bypass the network, thus freeing up resources for I/O bursts.