How to Scale the Storage and Analysis of Business Data Using a Distributed In-Memory Data Grid

By Nicole Hemsoth

June 4, 2012

By Dr. William L. Bain, ScaleOut Software, Inc.

A hallmark of the Information Age is the incredible amount of business data that companies have to store and analyze, so-called “big data.” The ability to efficiently search data for important patterns can provide an essential competitive edge.

For example, an e-commerce Web site needs to be able to monitor online shopping carts to see which products are selling quickly. A financial services company needs to hone its equity trading strategy as it optimizes its response to fast-changing market conditions. Businesses that face challenges like these have turned to distributed, in-memory data grids (also called distributed caches) to scale their ability to manage fast-changing data and comb through data to identify patterns and trends requiring a timely response.

Distributed, in-memory data grids (IMDGs) offer two key advantages: they store data in memory instead of on disk for fast access, and they run seamlessly across a farm of servers to scale performance. Perhaps best of all, they provide a fast, easy-to-use platform for running near real-time “what if” analyses on the data they store. By breaking the sequential bottleneck of a single server, they can take performance to a level that stand-alone database servers and NoSQL stores cannot match.

Software architects and developers often ask: “OK, I see the advantages, but how do I incorporate a distributed, in-memory data grid into my data storage architecture, and how could it help me analyze my data?”

Here are three simple steps for building a fast, scalable data storage and analysis solution using a distributed, in-memory data grid.

1. Store fast-changing business data directly in a distributed, in-memory data grid instead of a database server.

Distributed, in-memory data grids, like ScaleOut StateServer, are designed to plug directly into the business logic of today’s enterprise applications and services. By storing data as collections of objects instead of relational database tables, they match the in-memory view of data already used by business logic. This makes distributed data grids exceptionally easy to integrate into existing applications using simple APIs, which are available for most modern languages, like C#, Java, and C++.
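
To make this concrete, here is a minimal sketch of what grid-based storage of a fast-changing business object can look like. The ShoppingCart class, the DataGridClient interface, and the key naming are illustrative stand-ins for a vendor-specific API such as ScaleOut StateServer’s, not its actual signatures.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// A plain, serializable business object; an IMDG stores objects like this
// directly, keyed by an identifier, rather than mapping them to relational tables.
class ShoppingCart implements Serializable {
    final String customerId;
    final List<String> productIds = new ArrayList<>();
    ShoppingCart(String customerId) { this.customerId = customerId; }
}

// Hypothetical client interface standing in for a vendor-specific IMDG API
// (method names are illustrative only, not ScaleOut StateServer's actual API).
interface DataGridClient {
    void put(String key, Object value);   // create or update an object in the grid
    <T> T get(String key, Class<T> type); // read an object back by key, or null if absent
    void remove(String key);              // delete an object from the grid
}

class CartService {
    private final DataGridClient grid;
    CartService(DataGridClient grid) { this.grid = grid; }

    // Business logic works with the object in memory; the grid calls replace a
    // round trip to a database server for this fast-changing data.
    void addItem(String customerId, String productId) {
        String key = "cart:" + customerId;
        ShoppingCart cart = grid.get(key, ShoppingCart.class);
        if (cart == null) cart = new ShoppingCart(customerId);
        cart.productIds.add(productId);
        grid.put(key, cart);
    }
}
```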

Because distributed IMDGs run on server farms, their storage capacity and throughput scale simply by adding more grid servers. When hosted on a large server farm or in the cloud, a distributed, in-memory data grid’s ability to store and quickly access large volumes of data can grow well beyond that of a stand-alone database server.

2. Integrate the distributed, in-memory data grid with database servers as part of an overall storage strategy.

Of course, distributed, in-memory data grids are used to complement and not replace database servers, which are the authoritative repositories for transactional data and long-term storage. For example, in an e-commerce Web site, a distributed, in-memory data grid would hold shopping carts to efficiently handle a large workload of online shopping traffic, while a backend database server stores completed transactions, inventory, and customer records. The key to integrating a distributed, in-memory data grid into an enterprise application’s overall storage strategy is to carefully separate application code used for business logic from other code used for data access.

Distributed IMDGs naturally fit into business logic, which usually manages data as objects. This code is also where rapid access to data is needed, and that’s where distributed data grids provide the greatest benefit. In contrast, the data access layer typically focuses on converting objects into a relational form (or vice versa) for storage in database servers.

Interestingly, a distributed, in-memory data grid can optionally be integrated with a database server so that it automatically retrieves data from the database server if an object is missing from the grid. This is very useful for certain types of data, such as product or customer information, which is kept in the database server and retrieved only when needed by the application. However, most types of fast-changing, business logic data can be kept solely in a distributed, in-memory data grid and never written out to a database server.
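
This read-through behavior can be pictured with a small sketch. It reuses the hypothetical DataGridClient interface from the earlier example; the loader function stands in for a JDBC or ORM lookup, and a production IMDG would typically perform this fallback automatically through a registered callback rather than in application code.

```java
import java.util.Optional;
import java.util.function.Function;

// Read-through sketch: if an object is missing from the grid, load it from the
// backing database and insert it into the grid so later reads are served from memory.
class ReadThroughCache<T> {
    private final DataGridClient grid;                  // hypothetical grid client (see earlier sketch)
    private final Function<String, Optional<T>> loader; // e.g., a JDBC/ORM lookup by key
    private final Class<T> type;

    ReadThroughCache(DataGridClient grid, Function<String, Optional<T>> loader, Class<T> type) {
        this.grid = grid;
        this.loader = loader;
        this.type = type;
    }

    Optional<T> get(String key) {
        T cached = grid.get(key, type);
        if (cached != null) return Optional.of(cached);   // served from the grid
        Optional<T> fromDb = loader.apply(key);            // fall back to the database server
        fromDb.ifPresent(value -> grid.put(key, value));   // populate the grid for future reads
        return fromDb;
    }
}
```

Product or customer records fit this pattern well; fast-changing objects such as shopping carts would skip the loader entirely and live only in the grid.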

[Figure: Application server farm]

3. Analyze grid-based data using simple analysis codes and the “map/reduce” programming pattern.

Once a collection of objects, such as a Web site’s shopping carts or a financial company’s pool of stock histories, has been hosted in a distributed, in-memory data grid, it’s important to be able to scan all of this data for important patterns and trends. Over the last 25 years, researchers have developed a powerful, two-step method, now popularly called “map/reduce,” for analyzing large volumes of data in parallel. In the first step, a simple algorithm that looks at one object at a time searches for a pattern of interest; this algorithm is run in parallel on all objects to quickly analyze all of the data. In the second step, the results generated by these parallel runs are combined to produce an overall result, which ideally identifies an important trend.

[Figure: ScaleOut StateServer in-memory data grid]

For example, an e-commerce developer could write a simple method that analyzes each shopping cart to rate which product categories are generating the most interest. This code could be run on all shopping carts several times during the day (or perhaps after a marketing blitz on the Web site has been launched) to identify important shopping trends.
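
A sketch of that analysis, applied to the illustrative ShoppingCart objects from the earlier example, might look like the following. The first method is the per-cart “map” step and the second merges the per-cart tallies (“reduce”); a Java parallel stream stands in here for the grid’s distribution of the map step across its servers, and lookupCategory is a hypothetical helper that maps a product ID to its category. None of this is the ScaleOut API itself.

```java
import java.util.Collection;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

class CategoryTrends {
    // Step 1 ("map"): examine a single shopping cart and count the product
    // categories it contains. This is the simple, one-object-at-a-time algorithm.
    static Map<String, Long> analyzeCart(ShoppingCart cart,
                                         Function<String, String> lookupCategory) {
        return cart.productIds.stream()
                .map(lookupCategory)
                .collect(Collectors.groupingBy(c -> c, Collectors.counting()));
    }

    // Step 2 ("reduce"): merge the per-cart counts into an overall tally of
    // which categories are generating the most interest. An IMDG would run the
    // map step on each grid server against its local objects and then combine
    // the partial results; the parallel stream only mimics that distribution.
    static Map<String, Long> popularCategories(Collection<ShoppingCart> carts,
                                               Function<String, String> lookupCategory) {
        return carts.parallelStream()
                .map(cart -> analyzeCart(cart, lookupCategory))
                .flatMap(m -> m.entrySet().stream())
                .collect(Collectors.groupingBy(Map.Entry::getKey,
                        Collectors.summingLong(Map.Entry::getValue)));
    }
}
```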

Distributed, in-memory data grids offer an ideal platform for analyzing data using this “map/reduce” programming pattern. Because they store data as memory-based objects, the analysis code is very easy to write and debug as simple in-memory code. Programmers do not need to learn parallel programming techniques or understand how the grid works. Also, distributed, in-memory data grids provide the infrastructure needed to automatically run this analysis code on all grid servers in parallel and then combine the results. The net result is that, by using a distributed, in-memory data grid, application developers can easily and quickly harness the full scalability of the grid to discover data patterns and trends that are vital to a company’s success.

ScaleOut StateServer Grid Computing Edition from ScaleOut Software provides an example of built-in map/reduce (called Parallel Method Invocation). This product is an IMDG-based analytics platform that combines the popular map/reduce programming model with property-based object queries, dramatically simplifying the selection and analysis of data stored in its data grid. Fast, automatic, parallel scheduling ensures that memory-based data is analyzed at near real-time speeds with minimal data motion. Batch scheduling and file I/O overheads are eliminated entirely, enabling analysis of fast-changing data such as network telemetry, market data, or clickstream data for fast decision making. In-memory map/reduce dramatically shortens run times compared to approaches that analyze disk-based data, such as Hadoop. Built-in high availability ensures that analyses run to completion, even if a server or network connection fails.

As companies become ever more pressed to manage increasing data volumes and quickly respond to changing market conditions, they are turning to distributed, in-memory data grids to obtain the scalability boost they need. As clouds become an integral part of enterprise infrastructures, distributed, in-memory data grids should further prove their value in harnessing the power of scalable computing to provide an essential competitive edge.

Dr. William L. Bain is founder and CEO of ScaleOut Software, Inc. Bill has a Ph.D. in electrical engineering/parallel computing from Rice University, and he has worked at Bell Labs research, Intel, and Microsoft. Bill founded and ran three start-up companies prior to joining Microsoft. In the most recent company (Valence Research), he developed a distributed Web load-balancing software solution that was acquired by Microsoft and is now called Network Load Balancing within the Windows Server operating system. Dr. Bain holds several patents in computer architecture and distributed computing. As a member of the Seattle-based Alliance of Angels, Dr. Bain is actively involved in entrepreneurship and the angel community.

To learn more about ScaleOut Software’s in-memory data grids, please visit
