How to Scale the Storage and Analysis of Business Data Using a Distributed In-Memory Data Grid

By Nicole Hemsoth

June 4, 2012

by Dr. William L. Bain, ScaleOut Software, Inc.

A hallmark of the Information Age is the incredible amount of business data that companies have to store and analyze, so-called “big data.” The ability to efficiently search data for important patterns can provide an essential competitive edge.

For example, an e-commerce Web site needs to be able to monitor online shopping carts to see which products are selling quickly. A financial services company needs to hone its equity trading strategy as it optimizes its response to fast-changing market conditions. Businesses that face challenges like these have turned to distributed, in-memory data grids (also called distributed caches) to scale their ability to manage fast-changing data and comb through data to identify patterns and trends requiring a timely response.

Distributed, in-memory data grids (IMDGs) offer two key advantages. First, they store data in memory instead of on disk for fast access, and second, they run seamlessly across a farm of servers to scale performance. But perhaps best of all, they provide a fast, easy-to-use platform for running near real-time “what if” analyses on the data they store. By breaking the sequential bottleneck, they can take performance to a level that stand-alone database servers and NoSQL stores cannot match.

Software architects and developers often ask: “OK, I see the advantages, but how do I incorporate a distributed, in-memory data grid into my data storage architecture, and how could it help me analyze my data?”

Here are three simple steps for building a fast, scalable data storage and analysis solution using a distributed, in-memory data grid.

1. Store fast-changing business data directly in a distributed, in-memory data grid instead of a database server.

Distributed, in-memory data grids, like ScaleOut StateServer, are designed to plug directly into the business logic of today’s enterprise applications and services. By storing data as collections of objects instead of relational database tables, they match the in-memory view of data already used by business logic. This makes distributed data grids exceptionally easy to integrate into existing applications using simple APIs, which are available for most modern languages, like C#, Java, and C++.
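
As a rough illustration of this object-oriented style, the Java sketch below stores and retrieves a shopping-cart object through a simple put/get interface. The ShoppingCart class and the map-backed cartGrid are hypothetical stand-ins rather than ScaleOut’s actual API; a real IMDG client would expose similar calls while transparently distributing the objects across the grid servers.

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A business object stored directly in the grid as a serializable object.
class ShoppingCart implements Serializable {
    final String customerId;
    double totalValue;

    ShoppingCart(String customerId, double totalValue) {
        this.customerId = customerId;
        this.totalValue = totalValue;
    }
}

public class GridStoreExample {
    // Stand-in for an IMDG client (hypothetical). A real grid API exposes
    // similar put/get calls but spreads the objects across a server farm.
    static final Map<String, ShoppingCart> cartGrid = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // Business logic works with objects, not relational rows.
        cartGrid.put("cart:1001", new ShoppingCart("customer-42", 129.95));

        // Later, any application server in the farm can read the same object.
        ShoppingCart cart = cartGrid.get("cart:1001");
        System.out.println(cart.customerId + " -> $" + cart.totalValue);
    }
}
```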

Because distributed IMDGs run on server farms, their storage capacity and throughput scale just by adding more grid servers. When hosted on a large server farm or in the cloud, a distributed, in-memory data grid’s ability to store and quickly access large volumes of data can grow well beyond that of a stand-alone database server.

2. Integrate the distributed, in-memory data grid with database servers as part of an overall storage strategy.

Of course, distributed, in-memory data grids are used to complement and not replace database servers, which are the authoritative repositories for transactional data and long-term storage. For example, in an e-commerce Web site, a distributed, in-memory data grid would hold shopping carts to efficiently handle a large workload of online shopping traffic, while a backend database server stores completed transactions, inventory, and customer records. The key to integrating a distributed, in-memory data grid into an enterprise application’s overall storage strategy is to carefully separate application code used for business logic from other code used for data access.

Distributed IMDGs naturally fit into business logic, which usually manages data as objects. This code is also where rapid access to data is needed, and that’s where distributed data grids provide the greatest benefit. In contrast, the data access layer typically focuses on converting objects into a relational form (or vice versa) for storage in database servers.

Interestingly, a distributed, in-memory data grid can optionally be integrated with a database server so that it automatically retrieves data from the database server if it is missing from the grid. This is very useful for certain types of data, such as product or customer information, which is kept in the database server and retrieved only when needed by the application. However, most types of fast-changing, business-logic data can be kept solely in a distributed, in-memory data grid and never written out to a database server.
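
The following sketch illustrates this read-through pattern under simplifying assumptions: the grid is modeled as an in-process map, and the database lookup is simulated with a function. It is not ScaleOut’s integration API, just the shape of the logic: check the grid first, load from the database only on a miss, and keep the result in the grid for later reads.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ReadThroughExample {
    // Stand-in for the distributed grid (hypothetical; a real IMDG spans servers).
    private final Map<String, String> grid = new ConcurrentHashMap<>();
    // Stand-in for a database lookup, e.g. product details by SKU.
    private final Function<String, String> database;

    ReadThroughExample(Function<String, String> database) {
        this.database = database;
    }

    // Read-through: return the grid copy if present; otherwise load it from
    // the database and keep it in the grid for subsequent reads.
    String getProduct(String sku) {
        return grid.computeIfAbsent(sku, database);
    }

    public static void main(String[] args) {
        ReadThroughExample store =
            new ReadThroughExample(sku -> "Product details for " + sku); // simulated DB query
        System.out.println(store.getProduct("SKU-7")); // loaded from the "database"
        System.out.println(store.getProduct("SKU-7")); // served from the grid
    }
}
```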

[Figure: Application server farm]

3. Analyze grid-based data using simple analysis code and the “map/reduce” programming pattern.

Once a collection of objects, such as a Web site’s shopping carts or a financial company’s pool of stock histories, has been hosted in a distributed, in-memory data grid, it’s important to be able to scan all of this data for important patterns and trends. Over the last 25 years, researchers have developed a powerful, two-step method, now popularly called “map/reduce,” for analyzing large volumes of data in parallel. In the first step, a simple algorithm that looks at only one object at a time is run in parallel on all objects in the collection, analyzing each for a pattern of interest. Next, the results generated by this algorithm are combined to determine an overall result, which ideally identifies an important trend.
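
Here is a minimal, single-process sketch of the two-step pattern using a hypothetical StockHistory object. The per-object analysis (the “map” step) looks at one stock history at a time; the “reduce” step merges the per-object results into an overall answer. An IMDG would run the map step in parallel on every grid server, close to the data, rather than sequentially in one process as shown here.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// One stock-price history held in the grid (hypothetical object for illustration).
record StockHistory(String ticker, double[] closingPrices) {
    // "Map" step: analyze a single object in isolation, e.g. its latest daily move.
    double latestChange() {
        int n = closingPrices.length;
        return closingPrices[n - 1] - closingPrices[n - 2];
    }
}

public class MapReduceSketch {
    public static void main(String[] args) {
        List<StockHistory> histories = List.of(
            new StockHistory("AAA", new double[]{10.0, 10.5, 11.2}),
            new StockHistory("BBB", new double[]{42.0, 41.0, 40.2}));

        // Map: run the per-object analysis on every object (an IMDG would do
        // this in parallel on each grid server, local to the data).
        Map<String, Double> changes = histories.stream()
            .collect(Collectors.toMap(StockHistory::ticker, StockHistory::latestChange));

        // Reduce: combine the per-object results into one overall answer.
        double netChange = changes.values().stream()
            .mapToDouble(Double::doubleValue).sum();

        System.out.println("Per-stock changes: " + changes);
        System.out.println("Net change across portfolio: " + netChange);
    }
}
```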

[Figure: ScaleOut StateServer in-memory data grid]

For example, an e-commerce developer could write simple code that analyzes each shopping cart to rate which product categories are generating the most interest. This code could be run on all shopping carts several times during the day (or perhaps after a marketing blitz on the Web site has been launched) to identify important shopping trends.
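
A plain-Java sketch of that shopping-cart analysis might look like the following. The Cart object and its category map are hypothetical, and the sequential loop stands in for the parallel, per-server execution an IMDG would provide; the point is that both the per-cart “map” logic and the merging “reduce” logic are ordinary in-memory code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CartTrendAnalysis {
    // Hypothetical cart contents: product category -> quantity in the cart.
    record Cart(Map<String, Integer> itemsByCategory) {}

    // "Map" step: the per-cart analysis looks at one cart at a time.
    static Map<String, Integer> analyzeCart(Cart cart) {
        return cart.itemsByCategory();
    }

    // "Reduce" step: merge per-cart results into overall category totals.
    static Map<String, Integer> merge(Map<String, Integer> total, Map<String, Integer> partial) {
        partial.forEach((category, count) -> total.merge(category, count, Integer::sum));
        return total;
    }

    public static void main(String[] args) {
        List<Cart> carts = List.of(
            new Cart(Map.of("electronics", 2, "books", 1)),
            new Cart(Map.of("books", 3)),
            new Cart(Map.of("electronics", 1, "toys", 4)));

        Map<String, Integer> totals = new HashMap<>();
        for (Cart cart : carts) {                        // an IMDG would run this in parallel
            totals = merge(totals, analyzeCart(cart));   // on each grid server, then combine
        }
        System.out.println("Interest by category: " + totals);
    }
}
```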

Distributed, in-memory data grids offer an ideal platform for analyzing data using this “map/reduce” programming pattern. Because they store data as memory-based objects, the analysis code is very easy to write and debug as simple “in-memory” code. Programmers do not need to learn parallel programming techniques or understand how the grid works. Also, distributed, in-memory data grids provide the infrastructure needed to automatically run this analysis code on all grid servers in parallel and then combine the results. The net result is that by using a distributed, in-memory data grid, application developers can easily and quickly harness the full scalability of the grid to rapidly discover data patterns and trends that are vital to a company’s success.

ScaleOut StateServer Grid Computing Edition from ScaleOut Software provides an example of built-in map/reduce (called Parallel Method Invocation). This product is an IMDG-based analytics platform that combines the popular map/reduce programming model with property-based object queries, dramatically simplifying the selection and analysis of data stored in its grid. Fast, automatic, parallel scheduling ensures that memory-based data is analyzed at near real-time speeds with minimal data motion. Batch-scheduling and file-I/O overheads are eliminated entirely, enabling analysis of fast-changing data, such as network telemetry, market data, or click-stream data, for fast decision making. In-memory map/reduce dramatically shortens run times compared to approaches that analyze disk-based data, such as Hadoop. Built-in high availability ensures that analyses run to completion, even if a server or network connection fails.

As companies become ever more pressed to manage increasing data volumes and quickly respond to changing market conditions, they are turning to distributed, in-memory data grids to obtain the scalability boost they need. As clouds become an integral part of enterprise infrastructures, distributed, in-memory data grids should further prove their value in harnessing the power of scalable computing to provide an essential competitive edge.

Dr. William L. Bain is founder and CEO of ScaleOut Software, Inc. Bill has a Ph.D. in electrical engineering/parallel computing from Rice University, and he has worked at Bell Labs Research, Intel, and Microsoft. Bill founded and ran three start-up companies prior to joining Microsoft. At the most recent company (Valence Research), he developed a distributed Web load-balancing software solution that was acquired by Microsoft and is now called Network Load Balancing within the Windows Server operating system. Dr. Bain holds several patents in computer architecture and distributed computing. As a member of the Seattle-based Alliance of Angels, Dr. Bain is actively involved in entrepreneurship and the angel community.

To learn more about ScaleOut Software’s in-memory data grids, please visit www.scaleoutsoftware.com.
