October 12, 2007
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
President threatens funding for NSF, others in CJS bill;
Liquid Computing to Support AMD Quad Core Processors;
IBM pays others to say HP equipment is loud;
Power6 coming to a blade near you in February;
SGI engineers liquid cooling solution for ICE;
Sun launches servers based on its new T2 chips;
Panasas re-engineers RAID;
Next generation GPU market: "clash of the sub-titans";
NSF funds new science gateway research;
>>IBM puts its meters where your power is
IBM launched a new program today that allows mainframe customers to monitor their systems' precise energy consumption in real-time (press release at http://www-03.ibm.com/press/us/en/pressrelease/22433.wss):
Here's how the metering system works: the new IBM solution monitors a mainframe's actual energy and cooling statistics (collected by internal sensors) and presents them in real time on the System Activity Display. With this system, a user can now correlate the energy consumed with the work actually performed.
I'd like to see this make its way into our gear as well. This next part of the press release was potentially very interesting from a privacy standpoint:
IBM will also begin publishing typical energy consumption data for the IBM System z9 mainframe. The data is derived from actual field measurements of approximately 1,000 customer machines, determining average watts/hour consumed which can be used to calculate watts per unit -- similar to automobile miles per gallon estimates and appliance kilowatt per year ratings.
The data collected for August and September indicates that typical energy use normally runs about 60 percent of the "label" (maximum) rating for the mainframe model measured.
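To make the "watts per unit" idea concrete, here is a back-of-the-envelope sketch of the arithmetic the press release describes. Every number below is invented for illustration; none of it is IBM's data.

```python
# Back-of-the-envelope "watts per unit of work" arithmetic.
# All numbers are hypothetical, not IBM's measurements.

label_watts = 8_000                  # "label" (maximum) rating of the machine
typical_watts = 0.60 * label_watts   # typical draw ~60% of label, per the release

transactions_per_hour = 1_200_000    # hypothetical work performed in one hour
# Average watts sustained over one hour equals watt-hours consumed in that
# hour, so energy per transaction falls out directly.
wh_per_transaction = typical_watts / transactions_per_hour

print(f"typical draw: {typical_watts:.0f} W")
print(f"energy per unit: {wh_per_transaction * 1000:.1f} mWh/transaction")
```

The point of publishing the typical (rather than label) figure is exactly this kind of calculation: dividing realistic average draw by work performed gives a miles-per-gallon-style efficiency number.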
I assume all those customers whose data got sent back to the mothership ok'd this use; after all, when was the last time a big company ran off with customers' data and used it without their permission?
>>MorphMPI: moving MPI apps around without relinking
Those of you who write parallel applications know full well that, while using MPI to divide and conquer in your apps means that your code will compile and run just about anywhere, you cannot simply move your application from one MPI implementation to another, even on the same machine.
This is because MPI specifies an application programming interface (API), not an application binary interface (ABI). Switching to a new machine, or to another MPI implementation or launching mechanism on the same machine, requires the user to relink the application against the new MPI library.
ClusterMonkey (http://www.clustermonkey.net//content/view/213/32/) has an interesting piece on the value of adopting an application binary interface as a mechanism to stimulate ISV development:
Through striving for optimal performance, the MPI standard reduces portability, however. The MPI-standard forces applications to be launched using the same MPI-implementation as the one they were compiled against. This is no problem when the application is compiled and launched on the same machine. However this is a severe constraint for shrink-wrapped software.
Shrink-wrapped software is only available on a limited number of platforms which are selected by the application-developer. The HPC-world, in which MPI is mainly used, consists of many diverse platforms in contrast to the many X86 mainstream applications (e.g. office applications).
The rest of the article walks through a discussion of an implementation of an ABI for MPI called MorphMPI. MorphMPI is available right now under the LGPL and is hosted on SourceForge.
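The core idea behind such an ABI layer can be sketched in a few lines: the application binary is built against a small set of fixed handles that never change, and a thin translation layer maps them at run time onto whichever MPI implementation happens to be loaded. This is only a conceptual illustration; the handle values are invented, and MorphMPI's actual design and API differ.

```python
# Conceptual sketch of an ABI-translation layer. All handle values are
# invented for illustration; real MPI implementations define their own,
# and MorphMPI's actual mechanism is more involved than this.

# The fixed ABI the application binary is built against -- these values
# never change, so the binary never needs relinking.
MORPH_COMM_WORLD = 1
MORPH_INT = 2

# Each underlying implementation defines the same logical handles
# differently (made-up values).
BACKENDS = {
    "openmpi": {"COMM_WORLD": 0x44000000, "INT": 0x4C000405},
    "mpich":   {"COMM_WORLD": 0x84000001, "INT": 0x4C000841},
}

FIXED_TO_LOGICAL = {MORPH_COMM_WORLD: "COMM_WORLD", MORPH_INT: "INT"}

def translate(fixed_handle, backend):
    """Map a fixed ABI handle onto the loaded implementation's value."""
    return BACKENDS[backend][FIXED_TO_LOGICAL[fixed_handle]]

# The same binary-level call works whichever implementation is present;
# only the translation table behind it changes.
print(hex(translate(MORPH_COMM_WORLD, "openmpi")))
print(hex(translate(MORPH_COMM_WORLD, "mpich")))
```

Because the application only ever sees the fixed handles, an ISV can ship one binary and let the translation layer absorb the differences between MPI implementations.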
>>IBM, Google Bring Internet-Scale Computing to the Students
Intel is not the only big IT company trying to push parallel computing into our universities. This week, IBM and Google announced an initiative to add large-scale distributed computing courses to college curricula:
For this project, the two companies have dedicated a large cluster of several hundred computers (a combination of Google machines and IBM BladeCenter and System x servers) that is planned to grow to more than 1,600 processors. Students will access the cluster via the Internet to test their parallel programming course projects.
The press release is at http://www-03.ibm.com/press/us/en/pressrelease/22414.wss, with more summary from insideHPC at http://insidehpc.com/2007/10/08/google-and-ibm-anounce-program-to-train-next-generation-of-parallel-specialists/.
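The programming style these courses target can be shown with a toy, single-process word count in the map/shuffle/reduce pattern. This is a sketch of the general paradigm, not the actual course material or cluster software.

```python
# Minimal single-process illustration of the MapReduce pattern taught in
# large-scale distributed computing courses. Real coursework distributes
# the map and reduce phases across a cluster; this just shows the shape.
from collections import defaultdict

def map_phase(documents):
    # Emit (key, value) pairs: one (word, 1) per word occurrence.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Group all values emitted under the same key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Combine each key's values into a final result.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cluster runs jobs", "the jobs run in parallel"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"], counts["jobs"])
```

The appeal for teaching is that students write only the map and reduce functions; the framework (and the donated cluster) handles distribution, grouping, and fault tolerance.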