Startup Brings HPC to Big Data Analytics

By Michael Feldman

June 16, 2011

For all the accolades one hears about German engineering, there are few IT vendors native to that country. Recently though, we got the opportunity to talk with one such company, ParStream, a Cologne-based startup that has developed a bleeding-edge CPU/GPU-based analytics platform that marries high performance computing to big data.

ParStream, whose official company name is empulse GmbH, was founded four years ago by Michael Hummel and Joerg Bienert, who share the title of managing director. The duo funded the venture themselves and subsequently attracted some external investment. That was enough to develop the initial software and appliance products, and even snag a couple of paying customers. Right now they are looking for venture capital to move the business into the fast lane.

ParStream was initially formed around the idea of doing IT consulting and application development, much like the work Hummel and Bienert performed at Accenture, where the two had met. But about three years ago, their newly hatched company got a contract from the German tourism industry to build a search engine for a travel package offering. The client wanted the application to be able to search through about 6 billion data records against 20 parameters in less than 100 ms. Unfortunately, most current database technology, based on decades-old software architectures, doesn’t provide anything close to the level of parallelism required to digest such big databases under such strict time constraints. Thus was born ParStream and its new mission: to do big data analytics with an HPC flair.

Hummel and Bienert developed their own database software kernel that was able to handle the tourism industry’s search problem on conventional hardware, that is, x86 clusters. According to Bienert, they quickly realized the solution they came up with could be generalized. “Afterward, we looked at other industries and found that this big data challenge was everywhere, so we decided to make a product out of it,” he told HPCwire.

Hummel and Bienert figured any business that deals in super-sized datasets and needs interactive analysis would be able to use this technology. The main technological challenge is to run many concurrent queries on the data and deliver the results in real time or near real time. Applications include web analytics, bioinformatics, intelligent ad serving, algorithmic trading, fraud detection, market research, and smart energy metering, among many others.

As suggested by its name, the ParStream software performs parallel streaming of data structures. The focus is on structured data, but at a scale of thousands of columns and millions, or even billions, of rows. According to the company, its offering performs, on average, about 35 times faster than traditional database products.

The secret is to parallelize each query such that it can be processed simultaneously on many cores spread across multiple nodes. In a cluster environment, the data is stored on individual servers in a “shared nothing” environment. Since there is little interprocess communication, the performance can scale linearly with the cluster size; doubling processors or nodes should double throughput.
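To make the idea concrete, here is a minimal sketch of a shared-nothing scatter-gather query in Python. The partition layout, the filter predicate, and the use of a local process pool are all illustrative stand-ins; ParStream has not published its internal APIs.

```python
# A minimal sketch of shared-nothing query parallelism, assuming a
# hypothetical row layout; ParStream's actual engine is not public.
from concurrent.futures import ProcessPoolExecutor

# Each "node" owns its own partition of the rows; nothing is shared.
PARTITIONS = [
    [{"price": 120, "stars": 4}, {"price": 80, "stars": 3}],
    [{"price": 200, "stars": 5}, {"price": 95, "stars": 4}],
    [{"price": 60, "stars": 2}, {"price": 150, "stars": 4}],
]

def partial_query(rows):
    """Run the same filter independently on one partition."""
    return sum(1 for r in rows if r["price"] < 130 and r["stars"] >= 4)

if __name__ == "__main__":
    # Fan the query out to every partition at once, then merge the
    # partial counts at the end.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(partial_query, PARTITIONS))
    print(total)  # -> 2
```

Because the partitions never communicate during the scan, the final merge is the only serial step, which is what makes the linear-scaling claim plausible.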

They haven’t tested ParStream on a petabyte-sized system yet, but according to Bienert, there is no inherent limitation in the software that would prevent it from scaling to that level. To be fair, many other analytics engines also operate in parallel, but in many cases that simply means multiple queries can run simultaneously, with each query confined to a single processor.

Newer technologies, such as Google’s MapReduce and its open-source Hadoop derivative, are able to decompose a query into many independent pieces, just like the ParStream software. But according to Bienert, the MapReduce technology is better suited to batch-mode processing than to real-time analysis. Three of ParStream’s potential clients had tried the MapReduce scheme and encountered those limitations. In fact, last year Google itself abandoned MapReduce for query-type searching in favor of a higher performance technology called Dremel.
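The decomposition Bienert refers to can be seen in a toy MapReduce-style count: an independent “map” over each data chunk followed by a “reduce” that merges partial results. The data and phase functions below are hypothetical; a real Hadoop job also schedules each phase as a batch task and writes intermediate results to disk, which is the overhead that makes it a poor fit for sub-second queries.

```python
# A toy MapReduce-style decomposition of a count query (illustrative
# data; real Hadoop adds job scheduling and disk I/O between phases).
from functools import reduce

CHUNKS = [
    ["DE", "US", "DE", "FR"],
    ["DE", "JP", "US"],
    ["FR", "DE"],
]

def map_phase(chunk):
    # Emit a partial count of German records for one chunk.
    return sum(1 for country in chunk if country == "DE")

def reduce_phase(x, y):
    # Merge two partial counts into one.
    return x + y

partials = [map_phase(c) for c in CHUNKS]   # each chunk runs independently
print(reduce(reduce_phase, partials, 0))    # -> 4
```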

It’s not just about query parallelization though. ParStream’s real secret sauce is its index structure. As in many traditional relational databases, the bitmap index is stored in compressed form to save space in memory. But according to Bienert, the ParStream index can be used while compressed; there’s no need for a compute- and memory-intensive decompression step to operate on it. “This is the heart of ParStream and what makes it extremely fast,” he says.
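ParStream has not published the details of its index, but the general idea of querying a compressed bitmap without unpacking it can be sketched with run-length encoding, in the spirit of word-aligned hybrid (WAH) style schemes. The encoding and the example predicates below are hypothetical.

```python
# A sketch of intersecting two run-length-encoded bitmaps without
# decompressing them; ParStream's actual index format is proprietary.

def rle_and(a, b):
    """AND two RLE bitmaps, given as lists of (bit, run_length) pairs.

    For example, the bitmap 111100011 is encoded as [(1, 4), (0, 3), (1, 2)].
    """
    result = []
    i = j = 0
    rem_a, rem_b = a[0][1], b[0][1]
    while i < len(a) and j < len(b):
        bit = a[i][0] & b[j][0]        # AND the run values, not raw bits
        step = min(rem_a, rem_b)       # advance by the shorter remaining run
        if result and result[-1][0] == bit:
            result[-1] = (bit, result[-1][1] + step)   # merge adjacent runs
        else:
            result.append((bit, step))
        rem_a -= step
        rem_b -= step
        if rem_a == 0:
            i += 1
            rem_a = a[i][1] if i < len(a) else 0
        if rem_b == 0:
            j += 1
            rem_b = b[j][1] if j < len(b) else 0
    return result

# Rows matching "country = DE" AND "device = mobile" (hypothetical predicates):
country_de = [(1, 4), (0, 3), (1, 2)]   # bits 0-3 and 7-8 set
device_mob = [(0, 2), (1, 7)]           # bits 2-8 set
print(rle_and(country_de, device_mob))  # -> [(0, 2), (1, 2), (0, 3), (1, 2)]
```

The intersection walks the runs directly, so the work is proportional to the number of runs rather than the number of rows, and a decompressed bitmap is never materialized.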

That same technology makes it extremely efficient from a hardware standpoint. Bienert says in a production environment, where the other database solutions would require about 400 servers, ParStream only needs 20 and executes many times faster.

They initially wrote their software to run on generic 64-bit x86-based Linux platforms — single nodes and clusters. Later they found their parallel approach and bitmap structure were very well suited to general-purpose GPUs, which provided a speedup of 8-10x compared to the CPU-only version.
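The fit with GPUs follows from the data layout: once bitmaps are packed into machine words, the core of a query is an element-wise operation with no dependencies between words, so it spreads naturally across thousands of threads. In the sketch below, NumPy stands in for a GPU kernel, and the sizes and random contents are illustrative; a CUDA version would assign one word per thread.

```python
# Packed-bitmap intersection as a single data-parallel operation
# (NumPy as a CPU stand-in for a GPU kernel; data is illustrative).
import numpy as np

# Two bitmap indexes covering 6.4 million rows, packed 64 rows per word.
rng = np.random.default_rng(42)
idx_a = rng.integers(0, 2**63, size=100_000, dtype=np.uint64)
idx_b = rng.integers(0, 2**63, size=100_000, dtype=np.uint64)

# The whole intersection is one element-wise, embarrassingly parallel op.
matches = np.bitwise_and(idx_a, idx_b)

# Popcount: the number of rows satisfying both predicates.
row_count = int(np.unpackbits(matches.view(np.uint8)).sum())
print(row_count)
```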

Not just any GPU would do though. The ParStream software required error-correcting code (ECC) memory, since maintaining the integrity of the bitmap index and other compressed data structures in memory was critical. Arbitrarily flipped bits would not do. With NVIDIA’s Fermi (Tesla 20-series) GPUs, ParStream got that critical ECC support.

For the GPU-accelerated version, the company has to provide a custom appliance because the configuration is a little tricky for the software’s needs. In fact, each GPU deployment is a custom job at this point. The specific configuration (mix of Fermi cards, x86 processors, and memory capacity) is based on application requirements associated with throughput, database size, and so on. A single node can contain up to four CPUs and eight GPU cards.

At this point, the company is building up proof points for their technology. They have two existing customers in Europe in the eCommerce sector, and five additional prospects across multiple industries running proof-of-concept deployments.

Early results look encouraging. A German customer with a web analytics application originally needed three to five minutes on a “large cluster” to analyze billions of records with its traditional database solution. After some tuning of the ParStream software, the same query calculation ran in 15 ms, and on just four x86 servers. The most difficult part was convincing the customer that the solution was spitting out valid results “instantaneously.” The customer is now migrating its whole infrastructure to ParStream, says Bienert.

In two other instances where interactive analytics was the driving goal, ParStream delivered impressive performance results. A market research firm with 20 million records (1,000 columns apiece) was able to perform 5,000 queries in just 5 seconds, and a climate research center in Germany was able to analyze 3 billion records in 100 ms as part of an effort to identify hurricane risk. Each of these applications ran on a single server using the ParStream offering.

Bienert believes ParStream’s high-throughput, low-latency analytics gives it a significant edge over the competition at this point. Other up-and-coming big data vendors, like Vertica and EXASOL, are also touting highly parallel architectures, but as of today Bienert thinks ParStream is alone in offering GPU-based acceleration and its unique compressed data indexing scheme. The company is hoping that’s enough to attract some savvy investors.

In the meantime they’ll be hitting the trade show circuit. Hummel introduced the technology last September at NVIDIA’s GPU Technology Conference, where the company was selected as “One to Watch” by the GPU maker. ParStream’s first exhibition of their offerings will be at the International Supercomputing Conference in Hamburg, Germany next week, where they hope to wow the HPC faithful.
