June 15, 2007
Last week's acquisition of PeakStream by Google is still reverberating in the tech world. IT watchers have offered various explanations as to why the Internet giant bought a tiny company that develops stream computing technology for high-performance multicore processors. I chimed in with my own speculations last week. Theories about the acquisition usually revolve around Google's use of multicore technology to expand its Internet empire. The company's scaled-out computing infrastructure will surely depend on multicore hardware, so why not own some technology that exploits that kind of architecture in a novel way?
But let's face it -- even if Google uses the PeakStream stream computing technology to accelerate its own Web applications, it still seems a bit odd for a company that develops Internet software to be interested in owning a particular development platform. Then again, Google is an unusual IT company. Even though its main products are search engines, multimedia aggregators and web tools, Google also builds and maintains its own cyberinfrastructure. Rather than buying systems from cluster vendors, the company rolls its own from commodity x86 boards and Ethernet components (although of late it has become more secretive about this). So far this approach has worked out well for the Internet giant. It boasts one of the most efficient and robust distributed computing environments in the world. The inclusion of PeakStream may be just another manifestation of Google's inclination to control the means to its Internet ends.
However, a more logical buyer of PeakStream's multicore programming platform would have been Microsoft. Now that's a company with a direct interest in multicore software technology. Essentially all of the processor targets for Microsoft software are now multicore. And not only does Microsoft sell its own software development platforms, it also writes operating systems and applications. The hardware-agnostic PeakStream technology would appear to be a perfect fit for a software company that wants to incorporate multicore technology at every level of its offerings. The fact that Microsoft is now in the HPC business would have made a PeakStream acquisition that much more logical. If I were in a cynical mood, I might suggest that Google spirited away PeakStream to prevent Microsoft from getting it.
One thing is certain. The PeakStream acquisition focused some attention on its former rival, RapidMind Inc., a company that offers a very similar type of stream computing platform. RapidMind's product debuted in May, seven months after PeakStream delivered its first version. Ray DePaul, president and CEO of RapidMind, talked to me about his thoughts on the Google-PeakStream deal and what it might mean for his company.
Naturally, he was pleased that stream computing was getting some free publicity because of the acquisition. "This is a real validation of what's possible with this type of technology," said DePaul. "Anyone who looks at Google as a threat, a mentor or a technology leader should be a little concerned that they just leapfrogged everyone yet again."
But according to DePaul, RapidMind was less of a direct competitor to PeakStream than has been portrayed in the press. He maintains that RapidMind's customer base and product roadmap are independent of what PeakStream was doing. Nevertheless, DePaul admitted that a handful of former PeakStream customers have already approached his company. Since RapidMind actually supports a broader array of target processors than PeakStream did, presumably those customers can switch platforms fairly easily should they choose to do so.
DePaul maintains the real challenge for his company was (and is) overcoming resistance to using a high-level solution for performance-sensitive applications. Both PeakStream and RapidMind had to convince potential customers that their stream computing approach was the best way forward for multicore programming, not just because it was easier to use, but also because a systems approach could exploit more parallelism than a manual implementation. "Our main competitor is companies that think they can tackle the multithreaded game in the traditional way," said DePaul. "The in-house do-it-yourselfer is what we have to sell against."
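To make that contrast concrete, here is a minimal sketch of the data-parallel style DePaul is describing. This is not RapidMind's actual API (its platform was a C++-embedded programming language of its own); standard C++ parallel algorithms stand in here for the stream computing runtime, which would capture a kernel like this and compile it for the target processor:

```cpp
#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

// Hypothetical sketch: the computation is expressed as a per-element
// kernel applied over whole arrays, rather than as hand-managed threads.
// A stream computing platform would take such a kernel and map it onto
// multicore x86, Cell, or a GPU; std::execution::par_unseq stands in
// for that runtime here.
int main() {
    const std::size_t n = 1'000'000;
    std::vector<float> x(n, 1.0f), y(n, 2.0f), out(n);
    const float a = 3.0f;

    // SAXPY as a data-parallel map: out[i] = a * x[i] + y[i].
    // No explicit threads, locks, or work partitioning appear in
    // the source -- the runtime decides how to split the work.
    std::transform(std::execution::par_unseq,
                   x.begin(), x.end(), y.begin(), out.begin(),
                   [a](float xi, float yi) { return a * xi + yi; });

    std::cout << "out[0] = " << out[0] << '\n';  // expect 5
    return 0;
}
```

The point of the model is what's absent: because the parallelism is expressed declaratively, the platform is free to exploit however many cores, and whatever kind of processor, the hardware provides, which is exactly the advantage DePaul claims over the in-house do-it-yourself approach.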
RapidMind even encountered some of this resistance in its engagement with IBM, when working with the Cell processor team. But the IBMers had to be impressed when the RapidMind platform beat out the Cell programmers on a renderer application run on a Cell blade. In this case, the RapidMind-generated code doubled the performance of the hand-coded version. If RapidMind can maintain that kind of performance edge across an array of applications and processor targets, users should flock to the company's platform.
As for RapidMind becoming an acquisition target itself, DePaul said he has no interest in going down that path. His goal is to support as many platforms and applications as possible with the RapidMind offering. Currently, over a thousand developers are using the product, and a number of firms are looking into licensing the RapidMind technology.
Said DePaul: "I'm focused on building a company, not getting acquired by somebody in the Valley."
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - June 14, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.