March 28, 2008

Transforming Big Data

by John E. West

The spread of sophisticated instrumentation, and the dramatic increase in the capability and use of computers in all fields of human endeavor, have led to explosive growth in the amount of data we humans collect. A recent study by IDC puts the amount of data produced worldwide in 2007 at 281 exabytes, a 56 percent increase over 2006. While that number itself is subject to some debate, the trends are real.

What kind of data is this? A lot of it, according to IDC’s report, is digital imagery, both moving and still. But much of it is data measured or captured as a result of scientific and business processes: data streams related to national security and homeland defense, personal and organizational financial transactions, massive space and earth observing systems, and so on. The amount of data produced by the financial markets alone quadrupled last year.

But data isn’t information — in order to influence a course of action data have to be processed, assimilated and put in context for the people or systems making decisions. The field of data intensive computing, which has been around for a while now, is all about developing the systems and software that can facilitate this data transformation.

At the National HPCC Conference in Rhode Island this week, John Grosh, director of the Center for Applied Scientific Computing at Lawrence Livermore National Lab, gave a talk that touched on some of the work Livermore is doing in this area. The Livermore team is working, as are many others in the field, to identify the machine architectures, software design points, and tools needed to enable rapid processing of stored data in applications ranging from security and intelligence to climate science. The issue they are addressing is that, even with “small” datasets in the terabytes, interaction with disk in a traditionally architected HPC system can be quite painful when I/O performance matters. Some HPC vendors are addressing this concern by building large shared memory systems to hold the data in memory. This is an effective solution, but it can also be expensive. The Livermore team is looking at alternative architectures from the business intelligence (BI) community, along with technologies like NVRAM (non-volatile random access memory), flash memory drives, and so on.

As Grosh pointed out, the shift that is needed goes to the core of system design. Disk vendors have largely focused on capacity rather than bandwidth, and many supercomputing applications avoid I/O as much as possible. In data intensive applications, this view is turned on its head: it’s all about moving stored data in for processing, and pushing transformed data out. According to Grosh, NVRAM technology may be very important on the hardware front in the future of data intensive supercomputing. It offers an architecturally “clean slate” that doesn’t carry any of the design culture of disk storage along with it, and it may be able to fill the gap between DRAM and disk with respect to both price per capacity and access speed.

Pervasive Software is one of the companies working on the software front of the data intensive computing space, developing software architectures to support intensive analysis of large data stores. Pervasive’s DataRush product is designed primarily for single address space environments of the kind you’ll find in multi-socket, multicore nodes on today’s hardware. The framework is based on a dataflow model, written in Java, and provides high level primitives that mask the complexity and details of the parallel implementation. According to Pervasive CTO Mike Hoskins, DataRush is a “next generation massively parallel data pump.”

There is a lot in that paragraph to give lifetime HPTC professionals a chill. “Masking complexity” has long been synonymous with prohibiting access to the very details that determine performance. And Java? Isn’t that too slow?

Hoskins stresses the need to act on the reality that the value elements in supercomputing are not the machines anymore, but the people. “A lot of the supercomputing industry is stuck in a bit of a time warp,” said Hoskins speaking to HPCwire in April of 2007. “I started with mainframes and assembly programming. In those days machines were expensive and humans were cheap. Now, it’s turned around completely. The constant focus on machine performance really misses the boat.”

Pervasive is targeting DataRush — at least initially — at areas like business, bioinformatics, and finance, domains where Java programming is already popular. And recent versions of Java have overcome many of the earlier performance problems associated with garbage collection, making it a viable option in some cases.

Jim Falgout, solutions architect with Pervasive, explains that a core advantage of the DataRush approach with Java lies in its ability to dynamically adjust to available resources. Data flows and processing steps are described in an XML scripting language that moves data through the system and transforms it by the application of “operators” such as sort, join, average, and merge. (Later this year a Java description of the dataflow will be available as an alternative to the XML.) The framework includes basic operators, and users add new operators to support their specific needs through an SDK. DataRush dynamically assembles the bits of code it needs at runtime and, if desired, users can help the software adapt to varying amounts of available processing power and varying problem sets by binding in operators and operator implementations that are better suited to the situation at hand. This is reminiscent of the poly- or multi-algorithmic work that has been going on in traditional HPTC for some time, and has the potential to offer real advantages.
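
The general shape of that model is easy to sketch in plain Java. The code below is only an illustration of the ideas (batch-in, batch-out operators chained into a short dataflow, with the sort implementation bound in according to the cores the runtime sees); none of the class or method names come from DataRush itself.

    import java.util.Arrays;
    import java.util.List;
    import java.util.function.UnaryOperator;
    import java.util.stream.Collectors;

    public class OperatorSketch {

        // An operator in this sketch is just a batch-in, batch-out transformation.
        interface Operator extends UnaryOperator<List<String>> {}

        // Two interchangeable implementations of a "sort" operator.
        static final Operator SERIAL_SORT =
                batch -> batch.stream().sorted().collect(Collectors.toList());
        static final Operator PARALLEL_SORT =
                batch -> batch.parallelStream().sorted().collect(Collectors.toList());

        // Bind in whichever implementation suits the hardware at hand.
        static Operator chooseSort() {
            return Runtime.getRuntime().availableProcessors() > 1 ? PARALLEL_SORT : SERIAL_SORT;
        }

        public static void main(String[] args) {
            Operator toUpper = batch ->
                    batch.stream().map(String::toUpperCase).collect(Collectors.toList());

            // A two-stage "dataflow": upper-case the records, then sort them.
            List<String> out = chooseSort().apply(toUpper.apply(
                    Arrays.asList("merge", "join", "sort", "average")));
            System.out.println(out);   // prints [AVERAGE, JOIN, MERGE, SORT]
        }
    }

In the real framework the selection and the threading behind it are the library's job rather than the programmer's; the sketch just makes the moving parts visible.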

An article in Java Developer Journal this week by Pervasive’s Falgout outlines an application of DataRush dealing with large volumes of data, and highlights some potential advantages that processing outside an RDBMS offers for structured analytic queries. In the article Falgout describes an effort to de-duplicate a database of tens of millions of records. At the end of one month of development and tuning, Falgout’s team was able to demonstrate a record comparison rate of more than one million candidate pairs per second running on a four-way, quad-core Xeon HP ProLiant node.
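
The article does not reproduce the team’s matching code, but the general pattern of candidate-pair comparison is straightforward to sketch. In the hypothetical example below, records are grouped by a cheap “blocking” key so that only plausible pairs are compared, and the blocks are spread across cores with a parallel stream; the record type, the key, and the similarity test are all stand-ins, not Pervasive’s logic.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class DedupSketch {

        // A stand-in record type; a real run would read tens of millions of these.
        record Person(int id, String name) {}

        // Blocking key: the first three letters of the name, lower-cased. Only records
        // that share a key are ever compared, which keeps the pair count manageable.
        static String blockKey(Person p) {
            String n = p.name().toLowerCase();
            return n.substring(0, Math.min(3, n.length()));
        }

        // Crude similarity test standing in for a real field-by-field comparator.
        static boolean likelyDuplicate(Person a, Person b) {
            return a.name().equalsIgnoreCase(b.name());
        }

        public static void main(String[] args) {
            List<Person> records = List.of(
                    new Person(1, "John West"), new Person(2, "JOHN WEST"),
                    new Person(3, "Jon West"), new Person(4, "Mike Hoskins"));

            // Group records into blocks by key.
            Map<String, List<Person>> blocks =
                    records.stream().collect(Collectors.groupingBy(DedupSketch::blockKey));

            // Compare every pair within each block; parallelStream spreads blocks across cores.
            List<int[]> matches = blocks.values().parallelStream()
                    .flatMap(block -> {
                        List<int[]> hits = new ArrayList<>();
                        for (int i = 0; i < block.size(); i++)
                            for (int j = i + 1; j < block.size(); j++)
                                if (likelyDuplicate(block.get(i), block.get(j)))
                                    hits.add(new int[] { block.get(i).id(), block.get(j).id() });
                        return hits.stream();
                    })
                    .collect(Collectors.toList());

            matches.forEach(m -> System.out.println(m[0] + " matches " + m[1]));
        }
    }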

Another interesting outcome arose from tuning the report used to roll up results for the customer. The customer had developed a SQL query to select which member of each possible duplicate set should “win” while avoiding the presentation of duplicate decision pairs. The query ran in 3 hours on 14 million matched pairs. Using DataRush, Falgout’s team coded an operator in Java to perform the logic previously handled in SQL, and reduced the runtime to just 22 seconds.
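
Neither the customer’s SQL nor the replacement operator appears in the article, but the flavor of the roll-up can be sketched as a single pass over the matched pairs: group them into duplicate sets with a union-find and then pick one winner per set. Everything below, including the rule that the lowest record id wins, is a hypothetical illustration rather than the team’s actual operator.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.Map;

    public class WinnerRollup {

        // Union-find over record ids; each matched pair links two ids into one set.
        private final Map<Integer, Integer> parent = new HashMap<>();

        int find(int x) {
            parent.putIfAbsent(x, x);
            int root = parent.get(x);
            if (root != x) {
                root = find(root);
                parent.put(x, root);   // path compression
            }
            return root;
        }

        void union(int a, int b) {
            parent.put(find(a), find(b));
        }

        public static void main(String[] args) {
            // Stand-in for the 14 million matched pairs in the article.
            int[][] matchedPairs = { { 1, 2 }, { 2, 3 }, { 7, 8 } };

            WinnerRollup sets = new WinnerRollup();
            for (int[] pair : matchedPairs) sets.union(pair[0], pair[1]);

            // One pass to pick a winner per duplicate set: here, simply the lowest id.
            Map<Integer, Integer> winners = new HashMap<>();
            for (int id : new ArrayList<>(sets.parent.keySet())) {
                winners.merge(sets.find(id), id, Math::min);
            }
            winners.forEach((set, winner) ->
                    System.out.println("duplicate set " + set + " -> winner " + winner));
        }
    }

In a DataRush setting this kind of logic would live inside an operator like the one Falgout’s team wrote; the version here is plain Java purely for illustration.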

There is a lot that we still don’t know about the architectures, tools and techniques needed to effectively process the data we are amassing at work and at play in much of the first and second world. But, as with multicore programming techniques, data intensive computing provides the HPC community the opportunity to leverage products and models developed in the commodity community to advance the state of the art in our own field.