IBM Looks to Tap Massive Data Streams

By John E. West

July 3, 2008

A 2007 IDC study estimated that the world generated 161 billion gigabytes of digital information in 2006, and that the amount of information we create will outstrip our capacity to store it by 2010 (see insideHPC post). All this data — conversations, television programs, music, movies, stock trades, commodities values, medical images, shopping lists, and test results — isn’t just a statistical artifact. It is the stuff that drives the scientific, economic, and social engines of our society.

I spoke with Nagui Halim, director of event and streaming systems at IBM Research, about IBM’s stream computing efforts and where he sees the field going. He framed the problem for me by pointing out the fundamental difference between the computing most of us do every day and stream computing: “In traditional computing the machine dictates the pace at which things get done. In stream computing, the machine’s job is to figure out what’s going on in the real world in real time.”

This sounds fairly innocuous, but when you try to put the principle into practice, the challenges add up quickly. For example, according to Halim, the financial services industry generates five million data items per second. One way to make money in the markets is by exploiting information asymmetries, that is, cases where you know something that most people don’t. In some situations these asymmetries exist for only a few seconds, so the real-time systems supporting these applications have to consume, analyze, and react to the millions of pieces of data they are seeing in just a few milliseconds, and then move on to the next five million pieces. The same kinds of demands arise in real-time monitoring of complex industrial processes such as chip manufacturing, and in credit card fraud detection, commercial flight tracking, and so on.
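
To make that constraint concrete, here is a minimal sketch of the consume-analyze-react loop such a system must sustain. It is a toy in Python, not IBM's software: the names `tick_stream`, `react`, and the five-millisecond budget are all hypothetical stand-ins.

```python
import random
import time

def tick_stream(n):
    """Simulate a market data feed of (symbol, price) items."""
    for _ in range(n):
        yield random.choice(["IBM", "TD", "XOM"]), random.uniform(90.0, 110.0)

def react(symbol, price):
    """Stand-in for whatever action the trading system takes."""
    pass

BUDGET_MS = 5.0        # hypothetical per-item latency budget
last_price = {}

for symbol, price in tick_stream(1_000_000):
    start = time.perf_counter()
    prev = last_price.get(symbol)
    # "Analyze": treat an abrupt price move as a (toy) asymmetry signal.
    if prev is not None and abs(price - prev) / prev > 0.02:
        react(symbol, price)
    last_price[symbol] = price
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    if elapsed_ms > BUDGET_MS:
        raise RuntimeError("fell behind the stream")  # can't buffer reality
```

The final check is the point: unlike a batch job, a stream system cannot pause the world, so analysis that blows the per-item budget means data is effectively lost.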

Of course these data streams didn’t spring up overnight, and companies have experience building solutions to handle all of this information. But the efforts to date have focused on solving specific problems in specific businesses. Halim’s goal is to take what’s been learned from the various point solutions industry has developed to deal with information flows as they happen, and build a generalized infrastructure and body of knowledge that will accelerate the adoption of stream computing by researchers and individuals alike. Halim and IBM are working on the whole solution, from hardware, operating systems, and compilers to middleware and tools.

Although this is still a project in IBM’s labs, the existing stream computing software base includes millions of lines of code and over 300 patents, and the team has published many books and papers about the work. Now the stream environment IBM has built is being tested in real-world pilots. One of them, with TD Bank Financial Group in Canada, is using a Blue Gene and IBM’s stream computing software to support trading operations (see IBM’s press release from April).

IBM is relying on its stable of HPC hardware to provide the computational horsepower needed to support large scale stream computing, but not in the way you might expect. “The general model for HPC is to take a large problem and split it up into pieces. In stream computing we’re organizing the computation in quite a different way,” says Halim.

According to Halim, many stream computing applications can be organized as a pipeline, subdividing supercomputers into pools of processors that each handle a specific stage of the pipeline, taking the data that comes in and transforming it for further action in a subsequent stage. For example, in a voice processing application the stages might be organized to first decrypt individual voice packets, assemble packets into a conversation, convert the conversation to text, and then analyze the text for key phrases of interest that might alert a human or spark additional action and analyses. Depending upon the amount of voice information coming in, you might need tens, hundreds, or thousands of processors to handle the load.
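
That description maps naturally onto a staged pipeline with a worker pool per stage. The sketch below is a generic illustration, not IBM's environment: the four stage functions are hypothetical placeholders (a real assembly stage would be many-to-one, not one-to-one), and the pool sizes stand in for the processor pools Halim describes.

```python
import queue
import threading

def stage(workers, fn, inbox, outbox=None):
    """Start a pool of workers that apply fn to items from inbox,
    passing results downstream via outbox."""
    def worker():
        while True:
            item = inbox.get()
            result = fn(item)
            if outbox is not None:
                outbox.put(result)
            inbox.task_done()
    for _ in range(workers):
        threading.Thread(target=worker, daemon=True).start()

# Hypothetical stage functions standing in for the voice-processing example.
decrypt      = lambda pkt: pkt                 # decrypt individual packets
assemble     = lambda pkt: pkt                 # group packets into a conversation
transcribe   = lambda conv: f"text of {conv}"  # speech-to-text
scan_phrases = lambda text: None               # flag key phrases of interest

q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stage(8,  decrypt,      q0, q1)   # pool sizes tuned to each stage's load
stage(4,  assemble,     q1, q2)
stage(16, transcribe,   q2, q3)
stage(2,  scan_phrases, q3)

for pkt in range(100):            # feed simulated voice packets
    q0.put(pkt)
for q in (q0, q1, q2, q3):        # wait for every stage to drain
    q.join()
```

Scaling the load up or down then becomes a matter of resizing the pools rather than restructuring the application.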

Where Halim’s team is really focused, though, is on the software infrastructure needed to address stream computing in a universal way. The goal is to provide a general-purpose model for creating a stream application from individual data processing components that can be assembled to produce the desired results. The stream environment needs to adapt to the information it is seeing, focusing on areas of interest and rapidly moving past uninteresting features or trends. It must also adapt when the user’s needs change, and react to changes in the resources (both human and computer) available to work on the problem.
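
One way to picture such a component-assembly model is as operator composition, where each component consumes a stream of items and yields a transformed stream. This is a toy illustration of the idea, not IBM's programming model; `parse`, `smooth`, and `above` are invented operators.

```python
from functools import reduce

def compose(*operators):
    """Assemble a stream application by chaining operators, each of which
    consumes an iterable of items and yields a transformed iterable."""
    return lambda stream: reduce(lambda s, op: op(s), operators, stream)

def parse(stream):
    for line in stream:
        yield float(line)

def smooth(window):
    def op(stream):
        buf = []
        for x in stream:
            buf = (buf + [x])[-window:]
            yield sum(buf) / len(buf)   # moving average
    return op

def above(limit):
    def op(stream):
        return (x for x in stream if x > limit)
    return op

app = compose(parse, smooth(3), above(100.0))
print(list(app(["99.1", "101.7", "103.2", "98.4"])))
```

Because each operator only sees an incoming stream and produces an outgoing one, components can be swapped or rewired without touching the rest of the application, which is the property a general-purpose assembly model needs.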

Importantly, IBM is designing the stream infrastructure from the ground up to be useful to non-experts, which would be a welcome change from much of the software that is written for supercomputers.

One facet of this strategy is that the environment can run applications on resources ranging from laptops to supercomputers, automatically taking advantage of the computational attributes of the hardware available to it, and scheduling tasks around hardware failures. The stream software environment also includes composable generic components (join operators, detection of the dominant contributor in an array, change-point detection, and so on) that make the system useful right out of the box, allowing non-experts to do useful work with a gentle learning curve.
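
As an illustration of what one of those generic components might look like, here is a minimal change-point detector in the CUSUM style. It is a hypothetical sketch for intuition only; IBM's actual operator is certainly more sophisticated.

```python
def change_points(stream, drift=0.0, threshold=5.0):
    """One-sided CUSUM change-point detection over a numeric stream.

    Accumulates deviation above the running mean and yields the index
    at which the accumulated deviation exceeds `threshold`.
    """
    mean, n, cusum = 0.0, 0, 0.0
    for i, x in enumerate(stream):
        n += 1
        mean += (x - mean) / n                 # running mean
        cusum = max(0.0, cusum + x - mean - drift)
        if cusum > threshold:
            yield i
            mean, n, cusum = 0.0, 0, 0.0       # restart after a detection

data = [10.0] * 50 + [13.0] * 50               # level shift at index 50
print(list(change_points(data)))               # fires shortly after the shift
```

Packaging such detectors as ready-made operators is what would let a non-expert wire up a useful monitoring application without writing the statistics themselves.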

And out of the box it will come. Although this is still a research project and much work remains before the product is shrink-wrapped, Halim and his team are motivated by an expansive view that “stream computing is not just a new computing model, it is a new scientific instrument.”
