December 07, 2007
IBRIX, one of the smaller players in the HPC storage market, is preparing to enter its next growth phase. With a revamped executive team, led by CEO Bernard Gilbert, the company is looking to expand its market footprint.
Since the introduction of its Fusion parallel file system software in 2005, IBRIX has managed to snag some high-profile companies with its software offerings. Big-name customers like Pixar Animation Studios, Walt Disney Feature Animation, NCSA, AOL, Facebook, Monsanto, Caterpillar and others indicate growing interest in using parallel file systems in both HPC and ultra-scale big data applications.
IBRIX Fusion is an integrated suite of software that enables scalable file serving. The IBRIX high-throughput parallel file system (FusionFS) is designed for mainstream HPC platforms, i.e., Linux clusters with commodity storage. Unlike Panasas and Isilon, Fusion is a software-only solution, suitable across a range of storage hardware platforms. This enables data residing on multiple vendors' storage to be aggregated under a single file system namespace, with capacities in the petabyte realm. Also unlike other parallel storage solutions, IBRIX parallelizes both the file metadata and the file data itself. There is no centralized metadata server, which helps make scaling storage capacity more transparent for the user.
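The distributed-metadata idea described above can be sketched in a few lines. This is an illustrative assumption about how metadata ownership might be spread across servers (hashing a file's path to pick an owner), not IBRIX's actual algorithm; the class and server names are hypothetical.

```python
# Illustrative sketch of distributed file metadata (hypothetical, not
# IBRIX's actual implementation): rather than routing every lookup through
# one central metadata server, each file's metadata owner is derived from
# a hash of its path, so metadata load spreads across all segment servers.
import hashlib

class DistributedNamespace:
    def __init__(self, servers):
        self.servers = list(servers)

    def metadata_owner(self, path):
        # Hash the path to deterministically pick the owning server;
        # any client can compute this locally, with no central lookup.
        digest = hashlib.sha256(path.encode()).digest()
        index = int.from_bytes(digest[:8], "big") % len(self.servers)
        return self.servers[index]

ns = DistributedNamespace(["seg1", "seg2", "seg3", "seg4"])
print(ns.metadata_owner("/projects/render/frame_0001.exr"))
```

A production system would refine this with consistent hashing or a segment map so that adding servers does not reshuffle every file's owner, but the basic point stands: with no single metadata server in the path, metadata throughput scales with the number of servers.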
The solution suits both compute-intensive applications, like animation rendering, data mining, and genomics sequencing, and bulk storage applications, such as online storage services, email, video sharing, and digital music services. This points to the company's dual focus on high performance computing and large-scale, Web-based applications.
Gilbert, a former Sun Microsystems exec who came on board in June, was brought in specifically to drive the company through its next stage of growth. That means expanding its market penetration beyond its traditional HPC customers in research/education, media/entertainment, and oil and gas. The new areas Gilbert is going after are the financial services sector and Web 2.0-based media storage and distribution. IBRIX intends to follow the same strategy it found useful in other markets: sign up a big-name company (like Pixar in the media/entertainment sector) to attract other companies in the same vertical.
The process has already begun.
Earlier this year, AOL became a client when the Web giant's multi-petabyte storage environment became too unwieldy to manage with traditional NAS solutions. AOL needed to support rapidly growing storage capacity with 24/7 uptime, all hosted on inexpensive storage hardware. Fusion provided that through its high availability software features, allowing AOL to upgrade file servers on the fly or even take storage offline for maintenance. AOL's database, spread across four applications, is currently eight petabytes and growing.
Over the next few years, the company is hoping to take advantage of this demand for ultra-scale bulk storage management by other online digital media companies. Gilbert believes that businesses like AOL, which need to manage rapidly growing unstructured content on commodity storage, are perfectly suited to Fusion file systems. Internet companies offering software-as-a-service products are also candidates for high performance parallel file system solutions. In all cases, these online providers require the same high level of I/O throughput common to many applications in traditional high-end technical computing.
Apparently, the company is also on the verge of bringing a large financial firm into the fold. Gilbert says they expect to announce a big win within the next few weeks. He sees financial analytics as a big opportunity, since the storage topology is identical to that of many HPC research applications: a large number of compute nodes accessing a relatively small amount of data. In these cases, I/O throughput can easily become the bottleneck, leaving compute resources wasting precious time waiting for data. On Wall Street, bottlenecks like this are intolerable when you're trying to execute time-critical trades.
The big challenge for IBRIX in expanding its market reach is getting its foot in the door. To date, the company has been able to use some key partnerships to raise its visibility. Both Dell and EMC are strategic partners that have helped bring IBRIX into some major deals. HP and IBM are reseller partners with a big footprint in the HPC server market. Even so, IBRIX is looking to expand its partnerships to get into more accounts.
Gilbert says they're looking to achieve 1.5X year-over-year growth. A glance at the IBRIX Web site shows the company is currently looking to hire nine additional people, the typical profile of a small company in growth mode. With an immediate focus on expanding into its targeted verticals and adding some top-tier customers to its portfolio, IBRIX should have its hands full in 2008.
"I'm really encouraged by the technology we have," says Gilbert. "We are growing into some very interesting verticals. I think the potential there is [very large]. But I want to make sure we go in there with our eyes wide open."