Here’s a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
A Cray in your Speedo;
Bull to support Advanced Research Computing division at Cardiff;
One of Europe’s leading F1 teams deploys 38 TFLOPS Appro;
Verari introduces containerized computing solution;
SiCortex expands into Europe;
LSU hosts math research events;
>>Microsoft building out a “vast” array of datacenters
Nick Carr is reporting (http://www.roughtype.com/archives/2008/03/rumor_microsoft_1.php) on rumors of a “vast data-center push” by Microsoft. Nick cites an anonymous source:
The construction program will be “totally over the top,” said a person briefed on the plan. The first phase of the buildout, said the source, will include the construction of about two dozen data centers around the world, each covering about 500,000 square feet or more.
And he points out that, if true, these rumors are consistent with public comments made by MS CEO Steve Ballmer in a Financial Times interview last week. MS wants to make sure that it's on the short list of providers in the cloud/redshifted future, a list that today includes Amazon, with Google on the way.
>>University of Iowa Lands Grant for New Gear
Ching-Long Lin, professor of mechanical and industrial engineering at the University of Iowa, has received a $473,636 grant from the National Institutes of Health to procure a new supercomputer. Eric A. Hoffman, professor of radiology at UI's College of Medicine, professor of biomedical engineering, and director of the Iowa Comprehensive Lung Imaging Center, is co-investigator on the grant.
The system will provide computational, storage, and visualization resources to researchers working on pulmonary flow, lung mechanics, image matching and registration, cardiovascular imaging, and lung texture analysis.
Maybe one day this machine will provide a better answer as to why I should stop smoking my briar pipes, but until then….
Read the full article at http://www.press-citizen.com/apps/pbcs.dll/article?AID=/20080228/NEWS01/80228015/1079.
>>A case for hosted HPC: 11 million PDFs, Amazon, and MapReduce
There was a blog post a while back at the NY Times Web site (http://open.blogs.nytimes.com/2007/11/01/self-service-prorated-super-computing-fun/) from the tech guy responsible for converting the paper's content from 1851 to 2002 into PDFs. He did it in under 24 hours using 100 Amazon EC2 machines, Hadoop, and some scripts.
I had been using Amazon S3 service for some time and was quite impressed. And in late 2006 I had begun playing with Amazon EC2. So the basic idea I had was this: upload 4TB of source data into S3, write some code that would run on numerous EC2 instances to read the source data, create PDFs, and store the results back into S3. S3 would then be used to serve the PDFs to the general public. It all sounded pretty simple, and that is how I got the folks in charge to agree to such an idea — not to mention that Amazon S3/EC2 is pretty easy on the wallet.
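The architecture he describes maps neatly onto a Hadoop Streaming job. Purely as illustration — his actual code isn't public here — a minimal Python mapper might look like the sketch below. The bucket names, the img2pdf conversion step, and the use of the modern boto3 SDK are all my assumptions, not anything from the original post:

```python
#!/usr/bin/env python3
"""Hypothetical Hadoop Streaming mapper for an S3-in, S3-out PDF job.

Each input line is an S3 key naming one scanned source image; the
mapper fetches it, converts it to PDF, and writes the result back
to S3. A sketch only: bucket names and the conversion library are
illustrative, not the NYT's actual setup.
"""
import sys

import boto3
import img2pdf  # assumption: one of several image-to-PDF options

s3 = boto3.client("s3")
SRC_BUCKET = "nyt-scans"  # hypothetical source bucket
DST_BUCKET = "nyt-pdfs"   # hypothetical destination bucket


def convert(key: str) -> None:
    """Download one source image, convert it, upload the PDF."""
    local_src = "/tmp/page.tif"
    local_pdf = "/tmp/page.pdf"
    s3.download_file(SRC_BUCKET, key, local_src)
    with open(local_pdf, "wb") as out:
        out.write(img2pdf.convert(local_src))
    s3.upload_file(local_pdf, DST_BUCKET, key.rsplit(".", 1)[0] + ".pdf")


if __name__ == "__main__":
    # Hadoop Streaming feeds each mapper its input split on stdin,
    # one record per line; emitting key<TAB>status lets downstream
    # reducers or counters track progress.
    for line in sys.stdin:
        key = line.strip()
        if key:
            convert(key)
            print(f"{key}\tok")
```

The appeal of the Streaming model here is that the "some scripts" part stays dumb: Hadoop handles splitting the key list across the 100 instances and retrying failures, and each mapper only ever touches one file at a time.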
4 TB of data in, and 1.5 TB of new PDFs out. And now NYT content back to 1851 is free. This is a really good example of why the hosted HPC part of the currently fashionable cloud computing thing may have enough legs to sustain a real business. It's also an example of why building a successful hosted HPC business is going to be tough: you are selling cycles to people who, by definition, only need them occasionally. To build a business on customers like that, you'll need lots and lots of them to keep revenue steady enough to keep the lights on.
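To put rough numbers on "easy on the wallet" and "occasionally": at EC2's then-current rate of about $0.10 per small-instance-hour, 100 machines for 24 hours works out to roughly 100 × 24 × $0.10 = $240 of compute for the whole conversion. That one-off $240 burst is exactly what makes the model attractive to the customer and the revenue stream lumpy for the provider.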