October 13, 2006
Last Wednesday I got a call from Kai Staats, the CEO of Terra Soft, who told me that his company is developing a supercomputer based on Sony PlayStations. According to him, they'll be creating two clusters from a bunch of the Cell-processor-based PlayStation machines. "Really," I thought, "a cluster of PlayStations?" As I tried to visualize what this might look like, Staats explained the story:
"They are not actually the PlayStations you see at Wal-Mart. They're the 2U rackmount beta units that were used during the last two years by all the game developers. When the developers send them back to Sony, we get them. There are a thousand of them out there and we get 480 of them."
Sony Corporation paid Terra Soft to port its Yellow Dog Linux OS to the PlayStation target. The other components of the cluster software include Y-HPC v2.0, a commercial, cross-architecture Linux cluster construction suite, and the Moab cluster management suite from Cluster Resources. Terra Soft's Y-Bio bioinformatics suite is being optimized for the Cell processor.
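For those wondering why "optimized for the Cell processor" is more than marketing-speak: the Cell's synergistic processing elements (SPEs) each have a small local store and must be handed work explicitly by the PowerPC core, so codes don't simply recompile and fly. The fragment below is a minimal, purely illustrative sketch of that PPE-side offload pattern using IBM's libspe2 interface from the Cell SDK; the SPE program name is hypothetical, and this is not Terra Soft's actual Y-Bio code.

```c
/* Minimal PPE-side sketch of the Cell offload pattern (libspe2, Cell SDK).
 * Purely illustrative; the SPE program name is hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <libspe2.h>

/* An SPE-side program compiled separately and embedded into this PPE
 * executable (hypothetical name; built with the SDK's embedspu tool). */
extern spe_program_handle_t example_spu_kernel;

int main(void)
{
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_context_ptr_t spe;

    /* Create a context for one of the Cell's SPEs. */
    spe = spe_context_create(0, NULL);
    if (spe == NULL) {
        perror("spe_context_create");
        return EXIT_FAILURE;
    }

    /* Load the embedded SPE image into the SPE's 256 KB local store. */
    if (spe_program_load(spe, &example_spu_kernel) != 0) {
        perror("spe_program_load");
        return EXIT_FAILURE;
    }

    /* Run the SPE program to completion; real codes pass work through the
     * argp/envp pointers and move data with DMA to and from main memory. */
    if (spe_context_run(spe, &entry, 0, NULL, NULL, NULL) < 0) {
        perror("spe_context_run");
        return EXIT_FAILURE;
    }

    spe_context_destroy(spe);
    return EXIT_SUCCESS;
}
```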
The Terra Soft announcement said that two clusters -- a test system and a production system -- will be constructed and housed in a facility near the company's headquarters in Loveland, Colorado. According to Staats, the systems will be built by the end of November and will represent the first supercomputers based solely on the Cell processor.
"It's the most awesome project we've ever done," said Staats.
This doesn't mean that Sony is entering the supercomputing business. Staats said the company basically wanted to demonstrate the power and versatility of the PlayStation and the Cell processor. In addition, the production cluster built by Terra Soft is intended to be used for real-world research. Lawrence Berkeley, Oak Ridge and Los Alamos national laboratories, along with some premier universities, have been invited to use the clusters for bioinformatics research and development.
A 12-unit mini-version of this cluster will be showcased at next month's Supercomputing Conference in Tampa, Florida. The demo will take place in the IBM booth and will run the Y-Bio gene sequencing application.
The whole idea of reusing the PlayStation beta units got me to thinking. I wonder how hard it would be to build Cell-based clusters from discarded Sony PlayStations. Game enthusiasts upgrade their machines more often than they upgrade their wardrobe, promising a steady supply of raw hardware. Now that Terra Soft has assembled the system software, if some enterprising person could just figure out a relatively inexpensive way to recycle the guts of discarded PlayStations, they might be able to create an interesting little supercomputing business.
"We're all petaphiles now, plugged into a world of petabytes, petaops, petaflops."
So writes George Gilder in an article published in Wired Magazine this past week. Gilder, the publisher of the Gilder Technology Report, talks about the effect of petascale computing on the Internet and how different rates of technological advancements are interacting to favor the reestablishment of centralized computing at the expense of the personal computer. Enormous data centers are being built to feed our growing appetite for computing and information.
As Moore's Law loses ground to the much more rapid advancements in storage capacity and communication bandwidth, processing data becomes more expensive relative to storing data and moving it around. According to Gilder this favors locating the computing infrastructure closer to cheaper power sources so that the scarce and energy-hungry CPU and memory resources can be used more efficiently.
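To make that relationship concrete, here's a toy calculation. The yearly improvement rates below are hypothetical placeholders, not measured figures; the point is simply that if bytes-per-dollar improves faster than ops-per-dollar, the relative cost of computing on data climbs year over year.

```c
/* Illustrative back-of-envelope: how the cost of computing on data shifts
 * relative to storing/moving it when the two improve at different rates.
 * The rates are hypothetical placeholders, not measured figures. */
#include <stdio.h>

int main(void)
{
    double compute_gain  = 1.4;  /* assumed yearly improvement in ops per dollar   */
    double transfer_gain = 2.0;  /* assumed yearly improvement in bytes per dollar */
    double ratio = 1.0;          /* relative cost of compute vs. data movement, year 0 */

    for (int year = 1; year <= 5; year++) {
        /* Each year compute gets cheaper by compute_gain and data movement by
         * transfer_gain, so their cost ratio shifts by transfer_gain/compute_gain. */
        ratio *= transfer_gain / compute_gain;
        printf("year %d: computing is %.2fx as expensive, relative to moving data\n",
               year, ratio);
    }
    return 0;
}
```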
Says Gilder: "In the PC era, the winners were companies that dominated the microcosm of the silicon chip. The new age of petacomputing will be ruled by the masters of the remote data center - those who optimally manage processing power, electricity, bandwidth, storage, and location."
We are certainly seeing desktop applications move onto the Internet. Just this week, Google upgraded its Web-based application suite with "Docs and Spreadsheets," joining Gmail, Google Maps, Calendar and several others. This new service not only allows you to circumvent the PC for word processing and spreadsheet development, but also enables remote collaboration with other users. Since you only need a thin browser client to do this, the traditional PC -- with its vulnerable and limited-capacity local disk, out-of-date applications and bloated OS -- becomes superfluous.
Companies like Google, Yahoo, eBay and Amazon are the beneficiaries of this paradigm shift, while Microsoft is seen as a company that is attempting to transition from the old desktop computing model to the new data center model. Gilder points to Google, in particular, as one of the more adept companies at exploiting this new style of computing. The company recently established a 30-acre server farm on the banks of the Columbia River, where cheap hydroelectric power and access to a major fiber-optic hub give it an ideal environment for petascale-capacity computing. Microsoft and Yahoo are also apparently setting up shop along the Columbia, and Ask.com is scouting the area as well.
Not one to be overly sanguine about the rise of the data center, Gilder believes we will eventually return to the decentralized model of computing. He suggests that the technology balance that is currently eroding desktop computing will reverse when semiconductor technology takes another big leap forward -- along the lines of what Intel has recently proposed with their terascale chip, merging silicon optics with large numbers of simple computing cores.
Says Gilder: "The advantages of the new architecture may last only until the centripetal forces pulling intelligence to the core of the network give way, once again, to the silicon centrifuge dispelling it to the edges. Google has pioneered the miracle play of wringing supercomputer performance from commodity CPUs, and this strategy is likely to succeed as long as microchip progress remains in the doldrums."
Hmm... maybe. It's hard for me to imagine that the problems of desktop computing will be overcome with some buffed-out silicon. The heavy burden associated with local maintenance of software and data seems like a much bigger hurdle than adding more CPU muscle. And Gilder may also be underestimating the ability of software developers to use up ever greater amounts of processing power as the hardware becomes available.
In developed economies, services tend to centralize as they mature to take advantage of the efficiencies of specialization. Things like power generation, food production and processing, and media broadcasting have all become centralized to one degree or another. I can't think of any compelling reason why computing would be different.
I expect to be hearing about the DARPA High Productivity Computing Systems (HPCS) funding decision any day now. On the other hand, I've been thinking the same thing since July. The HPCS program has been stuck in neutral since the summer as the feds have procrastinated about making the vendor selections for the third and final phase of the program. Cray, IBM, and Sun Microsystems are the three candidates under consideration for developing a petascale supercomputer architecture under the direction of the HPCS program.
Since the 2007 Defense Appropriations Budget was signed into law in late September, DARPA now knows how much money is available for their myriad programs over the next fiscal year. The agency has a total of around $3 billion to play with in FY07, which is essentially unchanged from the previous year.
With that in mind, I'm guessing the HPCS vendor decision will happen by the end of this October... or by the Supercomputing Conference in mid-November... or maybe by Christmas. Obviously, I have no idea. If you think you do, write to me and give me your best guess. While you're at it, include who you think will be selected for Phase III: Cray, IBM and/or Sun. Correct entries will receive my undying respect.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - October 12, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.