December 07, 2007
IBRIX, one of the smaller players in the HPC storage market, is preparing to enter its next growth phase. With a revamped executive team, led by CEO Bernard Gilbert, the company is looking to expand its market footprint.
Since the introduction of its Fusion parallel file system software in 2005, IBRIX has managed to snag some high-profile companies with its software offerings. Big-name customers like Pixar Animation Studios, Walt Disney Feature Animation, NCSA, AOL, Facebook, Monsanto, Caterpillar and others are an indication that there is growing interest in using parallel file systems in both HPC and ultra-scale big data applications.
IBRIX Fusion is an integrated suite of software that enables scalable file serving. The IBRIX high-throughput parallel file system (FusionFS) is designed for mainstream HPC platforms, i.e., Linux clusters with commodity storage. Unlike the Panasas and Isilon offerings, Fusion is a software-only solution, suitable across a range of storage hardware platforms. This enables data residing on multiple vendors' storage to be aggregated under a single file system namespace, with capacities in the petabyte realm. Also unlike other parallel storage solutions, IBRIX parallelizes both the file metadata and the file data itself. There is no centralized metadata server, which helps make scaling storage capacity more transparent for the user.
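The idea of eliminating a centralized metadata server can be illustrated with a minimal sketch. This is not IBRIX's actual implementation — the class name, hashing scheme, and server count below are purely hypothetical — but it shows the general technique of spreading metadata ownership across servers by hashing file paths, so that no single server becomes a bottleneck as capacity scales:

```python
# Conceptual sketch only (hypothetical, not the FusionFS design):
# each file path is hashed to one of N metadata owners, so metadata
# load is spread across servers instead of funneled through one.
import hashlib


class DistributedMetadata:
    """Maps each file path to one of num_servers metadata owners."""

    def __init__(self, num_servers: int):
        self.num_servers = num_servers

    def owner(self, path: str) -> int:
        # Hash the path and reduce it modulo the server count.
        digest = hashlib.md5(path.encode("utf-8")).hexdigest()
        return int(digest, 16) % self.num_servers


cluster = DistributedMetadata(num_servers=4)
# The same path always maps to the same metadata owner.
server_id = cluster.owner("/projects/render/frame_0001.exr")
print(f"metadata owner for frame_0001.exr: server {server_id}")
```

In a scheme like this, adding capacity means adding servers and redistributing ownership, rather than upgrading a single metadata machine — which is roughly why distributed-metadata designs scale more transparently.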
The solution is suitable both for compute-intensive applications, like animation rendering, data mining, and genomics sequencing, and for bulk storage applications, such as online storage services, email, video sharing, and digital music services. This points to the company's dual focus on high performance computing and large-scale, Web-based applications.
Gilbert, a former Sun Microsystems exec who came on board in June, was specifically brought in to drive the company through the next stage of growth. That means expanding its market penetration beyond its traditional HPC customers in research/education, media/entertainment, and oil and gas. The new areas Gilbert is going after are the financial services sector and Web 2.0-based media storage and distribution. IBRIX intends to follow the same strategy it found useful in other markets: sign up a big-name company (like Pixar in the media/entertainment sector) to attract other companies in the same vertical.
The process has already begun.
Earlier this year, AOL became a client when the Web giant's multi-petabyte storage environment became too unwieldy to manage with traditional NAS solutions. AOL needed to support a rapidly growing storage capacity with 24/7 uptime, all hosted on inexpensive storage hardware. Fusion provided that through its high availability software features, allowing AOL to upgrade file servers on the fly or even to take storage off-line for maintenance. AOL's database, spread out over four applications, is currently eight petabytes and growing.
Over the next few years, the company is hoping to take advantage of this demand for ultra-scale bulk storage management by other online digital media companies. Gilbert believes that businesses like AOL, which need to manage rapidly growing unstructured content on commodity storage, are perfectly suited to Fusion file systems. Internet companies offering software-as-a-service products are also candidates for high performance parallel file system solutions. In all cases, these online providers require the same high level of I/O throughput common to many applications in traditional high-end technical computing.
Apparently, the company is also on the verge of bringing a large financial firm into the fold. Gilbert says they expect to announce a big win sometime within the next few weeks. He sees financial analytics as a big opportunity for them, since the storage topology is identical to that of many HPC research applications, that is, a large number of compute nodes accessing a relatively small amount of data. In these cases, I/O throughput can easily become the bottleneck, resulting in compute resources wasting precious time waiting for data. On Wall Street, bottlenecks like this are intolerable when you're trying to execute time-critical trading.
The big challenge for IBRIX in expanding its market reach is getting its foot in the door. To date, the company has been able to use some key partnerships to raise its visibility. Both Dell and EMC are strategic partners that have helped bring IBRIX into some major deals. HP and IBM are reseller partners with a big footprint in the HPC server market. Even so, IBRIX is looking to expand its partnerships to get into more accounts.
Gilbert says they're looking to achieve 1.5X year-over-year growth. A glance at the IBRIX Web site shows they're currently looking to hire nine additional people, reflecting the typical profile of a small company in growth mode. With the immediate focus of expanding into its targeted verticals and adding some top-tier customers to its portfolio, IBRIX should have its hands full in 2008.
"I'm really encouraged by the technology we have," says Gilbert. "We are growing into some very interesting verticals. I think the potential there is [very large]. But I want to make sure we go in there with our eyes wide open."