Distributed Data Grids and the Cloud: A Chat With ScaleOut’s Dr. William Bain

By Nicole Hemsoth

October 27, 2010

Distributed data grids, also known as distributed caches, store data in memory across a pool of servers (an HPC grid, say, or a web or ecommerce farm like Amazon.com's), making them well suited to holding fluid, fast-moving data. The technology positions any company offering it to serve a number of verticals in both the traditional and non-traditional HPC space, including financial services and large-scale ecommerce organizations.

One company that has been particularly visible on the distributed data grid front for ecommerce and financial services is ScaleOut Software, an eight-year-old company that has seen massive growth, driven most recently by rising interest from financial institutions.

As Dr. William Bain, founder and CEO of ScaleOut, noted of the interest from financial services, a vertical marked by its need for near real-time results: “Distributed data grids have evolved from a basic data cache into a sophisticated analysis platform to track and process massive market volumes. The ability to quickly and efficiently perform complex analyses on historical and real-time data has become vital to top Wall Street firms seeking competitive advantage.”

The company has garnered significant market share on the financial side of the spectrum, but talk about distributed data grids has been emerging again, due in part to more widespread cloud adoption in this and other areas, coupled with the explosion in the sheer volume of data generated in real time that must be analyzed in near real-time.

One reason distributed data grids have received so much attention is that traditional modes of data storage have built-in bottlenecks that prevent scalability, making them less attractive options for some. Bain notes that “bringing techniques from parallel computing that have been in the works for two or three decades to this problem” relieves some of the inherent weaknesses of traditional storage and optimizes performance through refinements in how data is stored, accessed and used.

Dr. Bain spent some time speaking with us about distributed data grids and typical use cases recently and put some of the technology in context—while providing a glimpse into how something that’s been around for some time is now receiving an added boost from the cloud.

Let’s put it in context: imagine you have hundreds of thousands of users accessing a popular site. The data they’re storing and rapidly updating (as would happen with a shopping cart) needs to be kept in a scalable store, since that’s important to keeping response times fast. Distributed caches have been used in this way for about seven years, and they’re now becoming vital for websites that need to scale performance.
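To make the idea concrete, here is a minimal sketch (not ScaleOut’s actual API, which the article doesn’t detail) of how a distributed cache client typically spreads keys across a pool of servers: each key is hashed to one node, so shopping-cart data scales out across the farm while the same key always routes to the same place. The class and node names are purely illustrative.

```python
import hashlib

class DistributedCacheClient:
    """Toy cache client that partitions keys across servers by hashing.
    Real products add replication, failover, and consistent hashing so
    that adding a server remaps only a fraction of the keys."""

    def __init__(self, servers):
        # Each "server" here is just an in-process dict standing in
        # for a remote cache node.
        self.servers = servers

    def _node_for(self, key):
        # Hash the key and map it to one of the server slots.
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.servers[int(digest, 16) % len(self.servers)]

    def put(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key):
        return self._node_for(key).get(key)

# A three-node "farm": shopping-cart updates spread across the nodes.
nodes = [{}, {}, {}]
cache = DistributedCacheClient(nodes)
cache.put("cart:alice", ["book", "laptop"])
cache.put("cart:bob", ["headphones"])
```

Because no single server holds all the carts, adding nodes grows both capacity and throughput, which is the scalability property the article describes.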

In financial services, this technology lets analysts store data in a form that is immediately ready for analysis. Several applications written for this area require distributed data grids to achieve the scalable performance they need.

What’s driving this is that the amount of data being analyzed is growing very rapidly, and the latency issues involved mean you have to have a scalable platform for analyzing data in real time. This is especially the case for large companies doing financial analysis; the kinds of applications these people are running include algorithmic trading, analyzing stock histories to predict the future performance of trading strategies, and so on, and those are a perfect fit for a scalable data store.

There are two key trends making this exciting. The first is that storing data in memory can dramatically improve performance over other approaches, such as doing a map-reduce-style computation on data based in a database, because in-memory storage eliminates the latency of transferring data before each computation.
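The map-reduce pattern Bain contrasts with can itself run over data already partitioned in memory, avoiding the database round trip entirely. The sketch below (with invented trade data; a real grid would hold each partition on a separate server) shows the shape of such an in-memory computation: a local map phase on each partition, then a merge of the partial results.

```python
from functools import reduce

# Trade records pre-partitioned in memory (illustrative data; in a
# real data grid each partition lives on a different server, close
# to the compute that processes it).
partitions = [
    [("AAPL", 150.0), ("MSFT", 240.0)],
    [("AAPL", 151.0), ("GOOG", 98.0)],
    [("MSFT", 242.0), ("AAPL", 149.5)],
]

def map_partition(records):
    """Map phase: compute per-symbol (sum, count) on one node's data."""
    out = {}
    for symbol, price in records:
        total, count = out.get(symbol, (0.0, 0))
        out[symbol] = (total + price, count + 1)
    return out

def merge(a, b):
    """Reduce phase: combine partial results from two nodes."""
    merged = dict(a)
    for symbol, (total, count) in b.items():
        t, c = merged.get(symbol, (0.0, 0))
        merged[symbol] = (t + total, c + count)
    return merged

partials = [map_partition(p) for p in partitions]
combined = reduce(merge, partials)
averages = {s: t / c for s, (t, c) in combined.items()}
```

Because each map step touches only data resident on its own node, the only cross-server traffic is the small partial-result dictionaries, which is where the latency savings come from.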

The second important part of this is the cloud, which provides a widely available platform for hosting these applications on a large pool of servers that are rented only for the time the application is running. There is a confluence of technologies that will drive this area to the forefront of attention, because it has created an opportunity we’ve been waiting on for 20 or 30 years.

The problem we had before was that it was expensive to buy a parallel computer, then with clusters in the last decade, people could have department-level clustering for HPC–an area that Microsoft’s been delivering software around. But now with the cloud we have a platform that will scale not to tens of nodes, but to hundreds or maybe thousands, which presents the opportunity to run scalable computations very easily and cost-effectively.

Stepping Back for the Bigger Picture

Bill Bain founded ScaleOut Software in 2003 after his experiences at Bell Labs Research, Intel and Microsoft as well as with his three startup ventures, among which were Valence Research where he developed a distributed web load-balancing software product that Microsoft acquired for its Windows Server OS and dubbed Network Load Balancing. He has a Ph.D. from Rice University where he specialized in engineering and parallel computing and holds a number of patents in both distributed computing and computer architecture.

While the focus was initially meant to cover the core technologies behind ScaleOut Software, the conversation during the interview began to drift to some “big picture” issues concerning the cloud and what place it has in HPC—not to mention some of the barriers preventing wider adoption and how such challenges might be overcome in the near future.

Bain reflected on where he’d seen computing head during his thirty years in HPC, stating:

I think we went through a period when HPC became less popular as single processors got faster in the ’90s, but with the turn of the century and the peaking out of Moore’s Law, people turned back to parallel computing, which is an area we were doing a lot of pioneering work in, and the cloud’s the next big thing.

Although we understood how parallel computing could drive high performance, people didn’t have the hardware, so you were stuck with department-level clusters unless you were the government doing nuclear research and could buy a 512-node supercomputer. But most people doing bioinformatics, fluid flow analysis, financial modeling and such were stuck with small department-level computers… So the question becomes: who are the players who will make it practical to do HPC in the cloud?

I think you should think of our technology not as some arcane cul-de-sac that might be moderately interesting; it’s bringing core HPC technologies to the cloud. You’ll find that other players are bringing technologies to the cloud but aren’t bringing scalability; those doing scheduling for the cloud, for instance, are taking platform approaches that don’t drive scalability. So the confluence of HPC and cloud is, I think, now occurring, and it’s bringing well-understood parallel computing techniques to this new platform and making it easy for programmers to get their applications up and running.

There’s one critical piece of the HPC cloud puzzle that’s missing, and it’s low-latency networking. If you look at the public clouds, they use standard gigabit networks, and very little can be said about the quality of service in terms of the collocation of multiple virtual servers; these are aspects of parallel computing that are vital and that people have spent decades trying to optimize. For instance, at Intel we built mesh-based supercomputers and invested heavily in technology that came out of Caltech for cut-through networks, in order to drive network latency way down. That was done because programmers learned that you need low-latency networking to get scalable performance for many applications: any application that shares data across servers needs very fast networking. In the cloud we find off-the-shelf networking. It is starting to look hopeful that this performance obstacle will be broken in the next couple of years as more providers offer options for low-latency networking. Until then we need to work around the limitation.
