Here’s a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
Sandia’s Red Storm slated to more than double capability;
Ranger’s 504 TFLOPS live on the TeraGrid;
Intel sheds light on Tukwila at ISSCC;
Sun’s Rock processor: confirmed delay into 2H 2009;
SGI integrates eXludus into life sciences offering;
Cray offering Moab from Cluster Resources;
UIUC, LSU, NVIDIA team up to advance GPGPUs;
SGI posts Q2 financials, reports loss.
>>ClearSpeed Trims Its Workforce
It seems that the recent financial downturn has taken its toll on accelerator specialist ClearSpeed.
Following decreased spending from the financial services market, ClearSpeed has been forced to downsize its workforce. The largest cuts fell on the marketing department, with smaller cutbacks in sales, finance, administration, and engineering. From coverage at The Register:
One source reckoned that ClearSpeed dropped about 40 per cent of its staff, including consultants. But CEO Tom Beese told us the figure is “not quite as high as that.”
The company is now down to 75 employees. Beese is anticipating a continued slowdown in business, driven primarily by the credit crunch in the finance sector.
Read the full article at http://www.theregister.co.uk/2008/02/04/layoffs_clearspeed/.
>>Liquid Computing Board Ousts CEO
Apparently, the board of directors at Liquid Computing did not see eye to eye with co-founder and CEO Brian Hurley. They have asked him to step down in favor of Greg McElheran of Axis Capital, who has been appointed acting CEO. A Liquid spokesperson attributed the change to the board’s decision to take the company in “another direction.” This is rumored to include expanding the business beyond high performance computing.
Liquid has received a total of $41 million in two rounds of funding from an investment group made up of Axis Capital, Newbury Ventures, VenGrowth, ATA Ventures, Business Development Bank of Canada, Export Development Canada and Adam Chowaniec.
Read the full article at http://www.informationweek.com/blog/main/archives/2008/02/liquid_computin.html.
>>IBM dreaming of a Blue Gene to host the Internet
From an IBM TJ Watson research paper on what else you could do with a Blue Gene (http://weather.ou.edu/~apw/projects/kittyhawk/kittyhawk.pdf):
Project Kittyhawk’s goal is to explore the construction and implications of a global-scale shared computer capable of hosting the entire Internet as an application.
Ashlee Vance at The Register (http://www.theregister.co.uk/2008/02/05/ibm_bluegene_web/) ran across the IBM research paper on Project Kittyhawk, an initiative to hammer the Blue Gene/P into a system that can host Web applications at a scale large enough to serve the entire internet. Nicholas Carr also picked up the thread and commented on the paper (http://www.roughtype.com/archives/2008/02/one_computer_to.php).
The project addresses what IBM sees as fundamental flaws in the “Google” model of hosting internet-scale applications on commodity PCs:
At present, almost all of the companies operating at web-scale are using clusters of commodity computers, an approach that we postulate is akin to building a power plant from a collection of portable generators. That is, commodity computers were never designed to be efficient at scale, so while each server seems like a low-price part in isolation, the cluster in aggregate is expensive to purchase, power and cool in addition to being failure-prone.
According to the paper, early results are “promising.” So why do you care? Well, it’s the whole internet, for crying out loud.
But if that isn’t enough, then consider this. Let’s assume it isn’t the whole internet, but it is really, really big. Lots of companies would use it (or it wouldn’t be really, really big). As companies move their hosted web-facing applications to this environment, they will have a built-in solution provider for their SMB HPC needs. And since this machine started life as a scientific supercomputer, a system like this could also serve as a nucleation site for the condensation of small- and medium-scale technical HPC requirements.
A problem with the Google-style cluster is that using it for technical HPC typically means re-imagining the scientific applications that need to run on it. That isn’t necessarily a bad thing (the community needs to re-imagine its applications for million-core computers anyway), but we haven’t had the impetus to do it yet.
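To put a rough number on that “million-core” point, here is a minimal, hypothetical Python sketch (not from the IBM paper, and not describing either system) contrasting a flat master-gathers-everything communication pattern with a tree reduction. The takeaway is simply that patterns tolerable on a few hundred commodity nodes stop making sense at a million cores, which is why applications need re-imagining rather than just recompiling.

```python
# Illustrative only: compare communication rounds for a flat gather vs. a
# binary-tree reduction as the core count grows. Core counts are made up.
import math

def flat_gather_rounds(p):
    # The root receives one message from each of the other p - 1 ranks,
    # so the number of rounds grows linearly with the core count.
    return p - 1

def tree_reduce_rounds(p):
    # Pairwise combining halves the number of active ranks each round,
    # so the number of rounds grows only logarithmically.
    return math.ceil(math.log2(p))

for cores in (1_024, 65_536, 1_000_000):
    print(f"{cores:>9} cores: flat gather {flat_gather_rounds(cores):>9} rounds, "
          f"tree reduce {tree_reduce_rounds(cores):>2} rounds")
```

At a million cores the flat pattern needs on the order of a million rounds at the root, while the tree finishes in about twenty; that gap is the kind of restructuring the paragraph above has in mind.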