Shaking the HPC In-Memory Stack
Many who have been in HPC for some years will remember GridGain, the in-memory computing company that has found success at a number of commercial and academic high performance computing sites since its official launch in 2010. That effort was backed with an initial $2.5 million investment, followed last year by a $10 million Series B round.
While the company still has firm roots in HPC and remains vocal about the advantages of in-memory approaches for scientific applications, it is using some of its VC funds to look beyond its supercomputing origins with new hooks inside a platform that has broader enterprise appeal. The result takes critical lessons from HPC but ties them together with Java string, a newly announced open source version for foot-in-the-door adoption, and a platform approach to in-memory computing that teases out high performance computing, streaming data processing, and Hadoop acceleration, with hooks into several cloud capabilities to boot.
The company’s co-founder, Nikita Ivanov, who has spent much of his 20-plus-year career on distributed computing problems, openly admits that one reason they may not have a large user base in HPC is their Java-based approach. However, he argues that this is not a shortcoming of the technology he has developed and evolved since 2005; rather, it is a problem with HPC’s inability to get out from under the spell of its love affair with C-based approaches.
Perhaps calling it a “love affair” isn’t fair; it’s more of a relationship born of legacy roots. As he explains, the issue is rooted in the Fortran era, which spun off multiple HPC-specific libraries that haven’t evolved to look like what he says he’s seeing everywhere else. Java, he says, is flexible, far easier to manage (especially compared to the MPI stacks he dealt with in the past), and more robust. “HPC has decades-old technology that has been refurbished,” says Ivanov, “but even for pure HPC workloads, nothing can touch what we have here based on Java.”
What’s interesting about that statement is that in their talks with potential customers across a broad swath, “traditional HPC” and start-up commercial enterprises alike, he sees a trend away from workloads that are “just HPC” or “just Hadoop.” Rather, systems are juggling a number of different workloads, each handled by separate software for its part of the disparate tasks, and all of which might benefit from in-memory computing.
This is where GridGain’s approach starts to make real sense. Instead of catering only to HPC, as they did in the past, they can offer tools for those workloads that mesh, under the same platform (the same piece of overarching software), with their Hadoop accelerator and/or their streaming real-time data analysis platform. Once users get past the Java roots, Ivanov says, they can try on different models and approaches in a way that makes sense across the stack. Often, he says, they find that users don’t have one pure purpose but need more than one of the in-memory hooks (for instance, streaming and Hadoop).
“We’re putting ourselves in the full in-memory platform space since HPC is only a limited view of what we do. We’re not just focused on compute-intensive applications. For data-intensive applications, we’re also focused on in-memory data grids, Hadoop accelerators, and in-memory streaming. Thus if you have small amounts of data but compute-intensive applications, as seen with Monte Carlo applications in financial services, this is a fit—but it’s also a fit when dealing with a data warehouse using our in-memory data grid approach.” Ivanov says they’re not carving out HPC, rather, they’re expanding the definition to be more in line with the wider range of what they do.
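For context, the Monte Carlo workloads Ivanov cites are a textbook case of small data and heavy compute: a handful of input parameters drive millions of simulated paths. The sketch below is a generic plain-Java illustration of that workload class (pricing a European call option by simulation); it does not use GridGain’s API, and all names and parameters are hypothetical.

```java
import java.util.Random;

// Illustrative only: a small-data, compute-intensive Monte Carlo pricer of the
// kind common in financial services risk work. Plain Java, not GridGain's API.
public class MonteCarloCall {
    // Prices a European call under geometric Brownian motion by simulation.
    public static double price(double spot, double strike, double rate,
                               double vol, double years, int paths, long seed) {
        Random rng = new Random(seed);
        double drift = (rate - 0.5 * vol * vol) * years;
        double diffusion = vol * Math.sqrt(years);
        double sum = 0.0;
        for (int i = 0; i < paths; i++) {
            // Simulate the terminal asset price for one path.
            double sT = spot * Math.exp(drift + diffusion * rng.nextGaussian());
            sum += Math.max(sT - strike, 0.0); // call payoff at expiry
        }
        // Discount the average payoff back to today.
        return Math.exp(-rate * years) * sum / paths;
    }

    public static void main(String[] args) {
        double estimate = price(100, 100, 0.05, 0.2, 1.0, 200_000, 42L);
        System.out.printf("Estimated call price: %.2f%n", estimate);
    }
}
```

The entire input fits in a few doubles while the loop dominates runtime, which is why such jobs parallelize well across an in-memory compute grid: each node can run an independent batch of paths and only the partial sums need to travel over the network.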
GridGain’s primary growth vertical is financial services, which the company says is a perfect fit given the sector’s need for real-time in-memory approaches to handling risk. According to the company’s CEO, Abe Kleinfeld, their work with one of the top banks in Russia, Sberbank, demonstrates how their high performance computing roots have fed into wider appeal. The Russian financial institution, which marks GridGain’s largest use case, is using the company’s platform to handle real-time risk analysis across its global trade portfolio. When they first tested GridGain’s capabilities, Kleinfeld says, they were able to manage over a billion transactions per second across 10 standard Dell blades sporting 96 GB of memory each. In other words, as he noted, “they were able to get a billion transactions per second for under $25,000 of hardware.”
Kleinfeld says they will likely reach a $3.5 million target this year, powered by taking their platform open source, which should get it into more shops and deepen potential OEM and other partnerships. “The time is ripe for in-memory computing because the economics of it have never been better; it makes almost no sense to do disk-based computing. Besides,” he said, “the world is looking to build on open standards. Why would anyone want to be locked into a box or buy several different products that do different things when they can get an open platform for a range of in-memory computing?”
As one might imagine, the open source version isn’t a community service. While all of the legs of GridGain’s in-memory platform are available for a try-on, the truly robust features needed to put it into mission-critical production sit behind a paid wall.