Here’s a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
President threatens to veto CJS bill funding NSF, others;
Liquid Computing to Support AMD Quad Core Processors;
IBM pays others to say HP equipment is loud;
Power6 coming to a blade near you in February;
SGI engineers liquid cooling solution for ICE;
Sun launches servers based on its new T2 chips;
Panasas re-engineers RAID;
Next generation GPU market: “clash of the sub-titans”;
NSF funds new science gateway research;
>>IBM puts its meters where your power is
IBM launched a new program today that allows mainframe customers to monitor their systems’ precise energy consumption in real-time (press release at http://www-03.ibm.com/press/us/en/pressrelease/22433.wss):
Here’s how the metering system works: the new IBM solution monitors a mainframe’s actual energy and cooling statistics (collected by internal sensors) and presents them in real time on the System Activity Display. With this system, a user can now correlate the energy consumed with the work actually performed.
I’d like to see this make its way into our gear as well. This next part of the press release was potentially very interesting from a privacy standpoint:
IBM will also begin publishing typical energy consumption data for the IBM System z9 mainframe. The data is derived from actual field measurements of approximately 1,000 customer machines, determining average watts consumed, which can be used to calculate watts per unit — similar to automobile miles-per-gallon estimates and appliance kilowatt-hours-per-year ratings.
The data collected for August and September indicates that typical energy use is normally about 60 percent of the “label,” or maximum, rating for the model of mainframe measured.
I assume all those customers whose data got sent back to the mothership ok’d this use; after all, when was the last time a big company ran off with customers’ data and used it without their permission?
>>MorphMPI: moving MPI apps around without relinking
Those of you who write parallel applications know full well that, while using MPI to divide and conquer in your apps means that your code will compile and run just about anywhere, you cannot simply move your application from one MPI implementation to another, even on the same machine.
This is because MPI specifies an application programming interface (API), not an application binary interface (ABI). Switching to a new machine, or to another launching mechanism on the same machine, requires the user to relink the application against the new MPI library.
ClusterMonkey (http://www.clustermonkey.net//content/view/213/32/) has an interesting piece on the value of adopting an application binary interface as a mechanism to stimulate ISV development:
Through striving for optimal performance, the MPI standard reduces portability, however. The MPI-standard forces applications to be launched using the same MPI-implementation as the one they were compiled against. This is no problem when the application is compiled and launched on the same machine. However this is a severe constraint for shrink-wrapped software.
Shrink-wrapped software is only available on a limited number of platforms which are selected by the application-developer. The HPC-world, in which MPI is mainly used, consists of many diverse platforms in contrast to the many X86 mainstream applications (e.g. office applications).
The rest of the article walks through a discussion of an implementation of an ABI for MPI called MorphMPI. MorphMPI is available right now under the LGPL and is hosted on SourceForge.
>>IBM, Google Bring Internet-Scale Computing to the Students
Intel is not the only big IT company trying to push parallel computing into our universities. This week, IBM and Google announced an initiative to add large-scale distributed computing courses to college curricula:
For this project, the two companies have dedicated a large cluster of several hundred computers (a combination of Google machines and IBM BladeCenter and System x servers) that is planned to grow to more than 1,600 processors. Students will access the cluster via the Internet to test their parallel programming course projects.
The press release is at http://www-03.ibm.com/press/us/en/pressrelease/22414.wss, with more summary from insideHPC at http://insidehpc.com/2007/10/08/google-and-ibm-anounce-program-to-train-next-generation-of-parallel-specialists/.