July 03, 2008
A 2007 IDC study estimates that the world generated 161 billion gigabytes of digital information, and that the pace of increase in the information we deal with will outstrip our capacity to store it by 2010 (see insideHPC post). All this data -- conversations, television programs, music, movies, stock trades, commodities values, medical images, shopping lists, and test results -- isn't just a statistical artifact. It is the stuff that drives the scientific, economic, and social engines of our society.
I spoke with Nagui Halim, director of event and streaming systems at IBM Research, about IBM's stream computing efforts and where he sees the field going. He framed the problem for me by pointing out the fundamental difference between the computing that most of us do every day and stream computing: "In traditional computing the machine dictates the pace at which things get done. In stream computing, the machine's job is to figure out what's going on in the real world in real time."
This sounds fairly innocuous, but when you try to put the principle into practice, the challenges start to add up. For example, according to Halim the financial services industry generates five million data items per second. One way to make money in the markets is by exploiting information asymmetries, that is, cases where you know something that most people don't. In some situations these asymmetries only exist for a few seconds. So real-time systems supporting these applications have to consume, analyze, and react to the millions of pieces of data they are seeing in just a few milliseconds, and then move on to the next five million pieces of information. The same kinds of demands show up elsewhere: real-time monitoring of complex industrial processes such as chip manufacturing, credit card fraud detection, commercial flight tracking, and so on.
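To put those rates in perspective, a quick back-of-the-envelope calculation (mine, not Halim's) shows how little time each item gets on a single serial consumer:

```python
# Back-of-the-envelope arithmetic for the rates cited above (illustrative only):
# at 5 million items per second, one serial consumer has, on average,
# 1 / 5,000,000 seconds -- 200 nanoseconds -- to handle each item,
# which is why the work has to be spread across many processors.
items_per_second = 5_000_000
budget_ns = 1e9 / items_per_second
print(f"per-item budget on one serial consumer: {budget_ns:.0f} ns")  # 200 ns
```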
Of course these data streams didn't spring up overnight, and companies have experience building solutions to handle all of this information. The efforts to date, however, have focused on solving specific problems in specific businesses. Halim's goal is to take what's been learned from the various point solutions that industry has developed to deal with information flows as they happen, and build a generalized infrastructure and body of knowledge that will accelerate the adoption of stream computing by researchers and individuals alike. Halim and IBM are working on the whole solution, from hardware, operating systems, and compilers to middleware and tools.
Although this is still a project in IBM's labs, the stream computing effort already encompasses millions of lines of code and has produced over 300 patents, and many books and papers have been written about the work. Now, the stream environment that IBM has built is being tested in the real world. One of those pilots, with TD Bank Financial Group in Canada, is using a Blue Gene and IBM's stream computing software to support trading operations (see IBM's press release from April).
IBM is relying on its stable of HPC hardware to provide the computational horsepower needed to support large scale stream computing, but not in the way you might expect. "The general model for HPC is to take a large problem and split it up into pieces. In stream computing we're organizing the computation in quite a different way," says Halim.
According to Halim, many stream computing applications can be organized as a pipeline, subdividing supercomputers into pools of processors that each handle a specific stage of the pipeline, taking the data that comes in and transforming it for further action in a subsequent stage. For example, in a voice processing application, the stages might be organized to first decrypt individual voice packets, assemble packets into a conversation, convert the conversation to text, and then analyze the text for key phrases of interest that might alert a human or spark additional action and analyses. Depending upon the amount of voice information coming in, you might need tens, hundreds, or thousands of processors to handle the load.
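As a rough illustration of this pipelined organization (my own sketch, not IBM's stream software), each stage can be modeled as a pool of workers pulling items from an input queue and pushing results to the next stage; the stage functions below are placeholders standing in for real decryption, assembly, and transcription:

```python
# Illustrative sketch of a staged stream pipeline: each stage gets its own pool
# of workers, so the pool sizes can be matched to how heavy each stage is.
import queue
import threading

def stage(worker_fn, in_q, out_q, pool_size):
    """Start a pool of worker threads for one pipeline stage."""
    def run():
        while True:
            item = in_q.get()
            if item is None:        # shutdown sentinel
                in_q.put(None)      # let sibling workers see it too
                break
            result = worker_fn(item)
            if out_q is not None:
                out_q.put(result)
    workers = [threading.Thread(target=run) for _ in range(pool_size)]
    for w in workers:
        w.start()
    return workers

# Placeholder stage functions for the voice-processing example above.
def decrypt(packet):    return packet[::-1]               # pretend decryption
def assemble(packet):   return f"conversation({packet})"  # pretend reassembly
def transcribe(convo):  return convo.upper()              # pretend speech-to-text
def analyze(text):
    if "ALERT" in text:                                   # pretend keyword spotting
        print("flagged:", text)

q1, q2, q3, q4 = (queue.Queue() for _ in range(4))
stages = [
    (decrypt,    q1, q2, 4),     # four workers for decryption
    (assemble,   q2, q3, 2),
    (transcribe, q3, q4, 8),     # the heaviest stage gets the biggest pool
    (analyze,    q4, None, 1),
]
pools = [stage(fn, i, o, n) for fn, i, o, n in stages]

for pkt in ["trela", "hello", "world"]:   # "trela" reverses to "alert"
    q1.put(pkt)
q1.put(None)                              # begin the shutdown cascade

for (fn, in_q, out_q, n), workers in zip(stages, pools):
    for w in workers:
        w.join()
    if out_q is not None:
        out_q.put(None)                   # propagate shutdown downstream
```

In a real deployment the queues would span machines and the pools would be rebalanced as load shifts, but the shape of the computation is the same: data flows forward through stages rather than being carved into independent chunks.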
But Halim's team is really focused on the software infrastructure needed to address stream computing needs in a universal way. The goal is to provide a general-purpose model for creating stream applications from individual data processing components that can be assembled to produce the desired results. The stream environment needs to be able to adapt to the information it is seeing, allowing it to focus on areas of interest and rapidly move past uninteresting features or trends. The environment also must adapt when the user's needs change, and react to changes in the resources (both human and computer) available to work on the problem.
Importantly, IBM is designing the stream infrastructure to be useful to non-experts from the ground up, which would be a welcome change from much of the software that is written for supercomputers.
One facet of this strategy is that the environment can run applications on resources varying from laptops to supercomputers, automatically taking advantage of the computational attributes of the hardware available to it, and with the ability to schedule tasks around hardware failures. The stream software environment also includes composable generic components (e.g., join operators, dominant contributor in an array, change point detection, and so on) that make the system useful right out of the box, allowing non-experts to do productive work with a short learning curve.
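To make the idea of composable components concrete, here is a minimal generator-based sketch (illustrative only, not IBM's actual operator library) in which a generic smoothing operator and a generic change-point operator are chained over a stream of values:

```python
# Illustrative sketch of composable stream operators: each operator consumes one
# stream (any iterable) and yields another, so operators chain like building blocks.
from collections import deque
from statistics import mean

def moving_average(stream, window=5):
    """Generic smoothing operator: yield the mean over a sliding window."""
    buf = deque(maxlen=window)
    for x in stream:
        buf.append(x)
        yield mean(buf)

def change_points(stream, threshold=2.0):
    """Generic change-point operator: flag jumps larger than `threshold`."""
    prev = None
    for i, x in enumerate(stream):
        if prev is not None and abs(x - prev) > threshold:
            yield (i, prev, x)            # (position, value before, value after)
        prev = x

# Compose the operators: raw ticks -> smoothed values -> detected shifts.
ticks = [10, 10.2, 9.9, 10.1, 10, 18, 18.3, 17.9, 18.1, 18.2]
for pos, before, after in change_points(moving_average(ticks, window=3)):
    print(f"shift near item {pos}: {before:.2f} -> {after:.2f}")
```

The point is not the specific operators but the composition model: a non-expert wires prebuilt components together instead of writing low-level stream-handling code.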
And out of the box it will come. Although this is still a research project and much work remains before the product is shrink-wrapped, Halim and his team are motivated by an expansive view that "stream computing is not just a new computing model, it is a new scientific instrument."