Distributed Data Grids and the Cloud: A Chat With ScaleOut’s Dr. William Bain

By Nicole Hemsoth

October 27, 2010

Distributed data grids, also known as distributed caches, store fast-moving, fluid data in memory across a pool of servers, whether an HPC grid or a web or ecommerce farm like Amazon.com's. This technology positions any company offering it to serve a number of verticals in both the traditional and non-traditional HPC space, including financial services and large-scale ecommerce organizations.

One company that has been particularly visible on the distributed data grid front for both ecommerce and financial services has been ScaleOut Software, an eight-year-old company that has seen rapid growth, driven most recently by rising interest from financial institutions.

As Dr. William Bain, founder and CEO of ScaleOut, noted of the interest from financial services, a vertical marked by its need for near real-time results: “Distributed data grids have evolved from a basic data cache into a sophisticated analysis platform to track and process massive market volumes. The ability to quickly and efficiently perform complex analyses on historical and real-time data has become vital to top Wall Street firms seeking competitive advantage.”

The company has garnered significant market share on the financial side of the spectrum, but talk about distributed data grids has been emerging again, due in part to more widespread adoption of the cloud in this and other areas, coupled with the explosion in sheer volumes of data generated in real time that must be analyzed in near real time.

One reason distributed data grids have received so much attention is that traditional modes of data storage carry built-in bottlenecks that limit scalability, making them less attractive options for some. Bain notes that “bringing techniques from parallel computing that have been in the works for two or three decades to this problem” is relieving some of the inherent weaknesses of traditional storage and optimizing performance through refinements in how data is stored, accessed and used.

Dr. Bain recently spent some time speaking with us about distributed data grids and typical use cases, putting the technology in context while providing a glimpse into how something that has been around for some time is now receiving an added boost from the cloud.

Let’s put it in this context: imagine you have hundreds of thousands of users accessing a popular site. The site needs to keep the data they’re storing and rapidly updating (as with a shopping cart) in a scalable store, since this is important to keeping response times fast. Distributed caches have been used this way for about seven years, and they’re now becoming vital for websites to scale performance.
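To make the idea concrete, here is a toy sketch (plain Python, not ScaleOut’s actual API) of how a distributed cache can partition keys such as shopping carts across a pool of servers using consistent hashing, so any web server can find and update a user’s data quickly:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Deterministic hash so every client maps a key to the same server.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class DistributedCache:
    """Toy distributed cache: keys are partitioned across servers on a
    consistent-hash ring, so adding or removing a server relocates only
    a fraction of the keys. The per-server dicts stand in for the
    in-memory stores that would live on remote machines."""

    def __init__(self, servers, replicas=100):
        self.ring = []  # sorted (hash, server) points on the ring
        self.stores = {s: {} for s in servers}
        for server in servers:
            for i in range(replicas):
                bisect.insort(self.ring, (_hash(f"{server}:{i}"), server))

    def _server_for(self, key):
        # First ring point clockwise from the key's hash owns the key.
        idx = bisect.bisect(self.ring, (_hash(key), chr(0x10FFFF)))
        return self.ring[idx % len(self.ring)][1]

    def put(self, key, value):
        self.stores[self._server_for(key)][key] = value

    def get(self, key):
        return self.stores[self._server_for(key)].get(key)

cache = DistributedCache(["serverA", "serverB", "serverC"])
cache.put("cart:alice", ["book", "laptop"])
assert cache.get("cart:alice") == ["book", "laptop"]
```

A production grid adds replication, network transport and failure detection on top of this partitioning scheme, but the core scaling property is the same: cart lookups spread evenly across the server pool instead of hammering one database.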

In financial services, this technology gives the analyst the ability to store data so that it is immediately ready for analysis. Several applications written for this area require distributed data grids to achieve the scalable performance they need.

What’s driving this is that the amount of data being analyzed is growing very rapidly, and the latency issues involved mean you have to have a scalable platform for analyzing data in real time. This is especially the case for large companies doing financial analysis; the kinds of applications these people are running include algorithmic trading, stock-history analyses that predict the future performance of a trading strategy, and so on, and those are a perfect fit for a scalable data store.

The first of the key trends making this exciting is that storing data in memory can dramatically improve performance over other approaches, such as doing a MapReduce-style computation on data held in a database, because in-memory storage eliminates the latency caused by data transfer.
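The performance argument can be illustrated with a small example. In this toy sketch (plain Python, not ScaleOut’s API, with made-up price ticks), a MapReduce-style aggregation runs over data already partitioned in memory: each partition is mapped to a partial result locally, and the partials are merged, with no database round-trips in the inner loop:

```python
from functools import reduce

# Price ticks already resident in memory, partitioned across "servers".
partitions = [
    [("AAPL", 101.2), ("MSFT", 27.1)],
    [("AAPL", 102.0), ("MSFT", 26.8)],
    [("AAPL", 100.5)],
]

def map_partition(ticks):
    # Map step: per-symbol (sum, count) computed locally on one partition.
    out = {}
    for sym, price in ticks:
        s, c = out.get(sym, (0.0, 0))
        out[sym] = (s + price, c + 1)
    return out

def merge(a, b):
    # Reduce step: combine partial results from two partitions.
    for sym, (s, c) in b.items():
        sa, ca = a.get(sym, (0.0, 0))
        a[sym] = (sa + s, ca + c)
    return a

# In a real grid the map step runs in parallel, one task per server.
partials = [map_partition(p) for p in partitions]
totals = reduce(merge, partials, {})
averages = {sym: s / c for sym, (s, c) in totals.items()}
```

Only the small (sum, count) partials cross server boundaries; the bulk tick data never moves, which is where the latency savings over a database-backed computation come from.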

The second important part of this is the cloud: the cloud is providing a widely available platform for hosting these applications on a large pool of servers that are rented only for the time the application is running. There is a confluence of technologies that will drive this area to the forefront of attention, because it has created an opportunity we’ve been waiting on for 20 or 30 years.

The problem we had before was that it was expensive to buy a parallel computer; then, with clusters in the last decade, people could have department-level clustering for HPC, an area Microsoft has been delivering software around. But now with the cloud we have a platform that will scale not to tens of nodes but to hundreds or maybe thousands, which presents the opportunity to run scalable computations very easily and cost-effectively.

Stepping Back for the Bigger Picture

Bill Bain founded ScaleOut Software in 2003 after his experiences at Bell Labs Research, Intel and Microsoft, as well as with his three startup ventures, among them Valence Research, where he developed a distributed web load-balancing software product that Microsoft acquired for its Windows Server OS and dubbed Network Load Balancing. He holds a Ph.D. from Rice University, where he specialized in engineering and parallel computing, and holds a number of patents in distributed computing and computer architecture.

While the interview was initially meant to cover the core technologies behind ScaleOut Software, the conversation began to drift to some “big picture” issues concerning the cloud and its place in HPC, not to mention some of the barriers preventing wider adoption and how such challenges might be overcome in the near future.

Bain reflected on where he’d seen computing head during his thirty years in HPC, stating:

I think we went through a period when HPC became less popular as single processors got faster in the ’90s, but with the turn of the century and the peaking out of Moore’s Law, people turned back to parallel computing, which is an area we were doing a lot of pioneering work in, and the cloud’s the next big thing.

Although we understood how parallel computing could drive high performance, people didn’t have the hardware, so you were stuck with department-level clusters unless you were the government doing nuclear research and could buy a 512-node supercomputer. But most people doing bioinformatics, fluid flow analysis, financial modeling and such were stuck with small department-level computers…So the question becomes: who are the players who will make it practical to do HPC in the cloud?

I think you should think of our technology not as some arcane cul-de-sac that might be moderately interesting; it’s bringing core HPC technologies to the cloud. You’ll find that other players are bringing technologies to the cloud but aren’t bringing scalability; those doing scheduling for the cloud, for instance, are taking platform approaches that are not driving scalability. So the confluence of HPC and cloud is, I think, now occurring, and it’s bringing well-understood parallel computing techniques to this new platform and making it easy for programmers to get their applications up and running.

There’s one critical piece of the HPC cloud puzzle that’s missing, and it’s low-latency networking. If you look at the public clouds, they use standard gigabit networks, and very little can be said about quality of service in terms of the collocation of multiple virtual servers; these are aspects of parallel computing that are vital and that people have spent decades trying to optimize. For instance, at Intel we built mesh-based supercomputers and invested heavily in cut-through networking technology that came out of Caltech in order to drive network latency way down. That was done because programmers learned that you need low-latency networking to get scalable performance for many applications; any application that shares data across the servers needs very fast networking. In the cloud we find off-the-shelf networking. Now, it is starting to look hopeful that this performance obstacle will be broken in the next couple of years as more providers offer options for low-latency networking. Until then we need to work around this limitation.
