March 10, 2011
On Wednesday, NetApp announced an agreement to buy the Engenio storage business from LSI. Engenio makes high-bandwidth storage gear for the HPC and multimedia markets, two areas from which NetApp's current enterprise-focused portfolio has largely excluded it. The price tag for the deal is $480 million, making it the largest acquisition in NetApp's 19-year history.
During an analyst conference call, NetApp CEO Tom Georgens explained the deal would enable his company to add a total addressable market of $2 billion per year ($1 billion each for HPC and video storage/distribution/surveillance), a number he expected to grow to $5 billion by 2014. On the HPC side, he mentioned immediate opportunities in the federal government, entertainment, and oil & gas.
Georgens characterized the acquisition as a clean fit for NetApp, since there is little overlap between the two product lines. Today the NetApp portfolio is geared toward the enterprise. The company's Fabric-Attached Storage (FAS) running NetApp's proprietary Data ONTAP OS allows users to consolidate disparate file systems (NFS and CIFS) under a single platform. By contrast, Engenio offers stripped-down storage arrays optimized for high bandwidth and scalable capacity, which makes it a particularly apt fit for the performance-demanding environments of technical computing and multimedia.
Although there's little conflict between the storage lines, there's not much synergy either -- at least from the technology point of view. There are no plans (or reasons) to port ONTAP to Engenio gear, and nothing on the roadmap to build a more full-featured, integrated software-hardware offering. For HPC use, most Engenio storage ends up running parallel file systems like Lustre or GPFS (and to a lesser extent Panasas' PanFS or BlueArc's SiliconFS), supplied by OEMs.
Rather, the synergy comes on the sales side. Although not in the HPC space, NetApp has accounts with many customers that run performance-hungry workloads, including national labs, EDA firms, banks, entertainment companies, and so on. Up until now it could only provide storage for general data management, leaving the HPC money on the table for someone else. Between direct sales and channel partners, NetApp also has a much wider reach than LSI, whose regime relied heavily on OEM partners like IBM, Cray, Panasas, BlueArc, SGI, Teradata and T-Platforms to sell Engenio gear into the HPC market.
How the OEM-centric sales model plays out under NetApp remains to be seen. In the conference call, Georgens extolled the virtues of direct sales and channel partners, but acknowledged that some balance will have to be struck. For the time being, he expects to keep a large share of the OEM business, but "not every single dollar that we have today."
Certainly in HPC, storage infrastructure tends to be rather application-specific, so it ends up getting sold by server and storage OEMs as part of a system deployment (as opposed to as a centralized data repository). With no plans to offer a more turnkey Engenio-based storage product, I'm not sure how that OEM-heavy sales model changes significantly. In any case, NetApp is not going to dissolve partnerships on its own if it can keep selling products through Engenio's existing OEM buddies. As Georgens said, those deals are like "found money."
The complication here is that some Engenio partners like Panasas, BlueArc and RAID Inc. are storage OEMs that compete more generally with NetApp. Keeping those relationships is still possible -- lots of companies compete in one market and cooperate in another -- but some of these vendors may end up looking elsewhere for their storage arrays. Even some of the server OEMs with independent storage lines may decide that NetApp is too strange a bedfellow to stick with long term.
From NetApp's point of view, though, it looks like all upside. With the additional Engenio revenue stream, they expect to tack on another $750 million in sales for FY2012, with that number increasing in concert with future market growth in HPC and multimedia. And with their own global sales force and partners, they think they can generate a good deal more revenue than would have been possible under LSI. "We can exploit this technology in a way the seller could not," said Georgens.
Posted by Michael Feldman - March 10, 2011 @ 4:52 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.