Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
10 words and a link
New SGI reports continued losses in preliminary Q2 results
SGI announces scalable workgroup cluster
Blue Collar Computing pioneer heads to RENCI
LAPACK on CUDA beta available for free
Emerson datacenter sports largest solar array in Missouri
Sun releases HPC Software, Linux Edition 2.0
Still no word about where SiCortex’s assets went
Statistics with R on your favorite supercomputer
Fujitsu super at RIKEN enters production in Japan
Supermicro debuts 4TF GPU-based personal super
EPEAT electronics ratings go global, but no HPC yet
Berkeley pockets $62M for 100Gbps Ethernet research
Fixstars revs HPC cluster suite for Sony Playstation
For the datacenter, efficient trumps pretty
Microsoft’s Dan Reed has a post this week about how effectively datacenters turn the energy they consume into the end goal: powering computers. Dan’s post is specifically about the PUE (power usage effectiveness) metric:
Many legacy data centers — those built more than a few years ago — have PUEs in excess of two, or even three. This is largely due to inefficient computer room air-conditioning (CRAC) units, lack of hot and cold aisles, energy losses due to multiple (unnecessary) voltage conversions and aging or inappropriate building designs.
Today, state of the art data centers have PUEs below 1.5, and there are new designs that could approach a PUE of one by reducing UPS support where appropriate, operating at substantially higher temperatures and exploiting ambient cooling. Many people do not realize that computing hardware is much more resilient to high temperature than history and practice would suggest. It need not be chilled to temperatures suitable for polar bears.
Indeed, Dan goes on to mention Christian Belady’s experiment last year, in which a set of servers ran successfully outside in a tent for six months.
But then Dan goes off into an area that I think is very relevant to the shift that must happen in the minds of HPC datacenter owners. Not the managers, or the people who work in the center, but the owners: the people who write the checks and want to give the tours. Unless you live near cheap, reliable power in an area of the country where you can take advantage of outside air economization, you should probably NOT have a big datacenter anywhere near you. This means many things, including no machine room to tour when VIPs visit you.
This is an outstanding opportunity: it means that you can focus your money on building datacenters that work best for the machines even if they are ugly, and it means that with visitors you’ll have every reason to talk about what’s really important in supercomputing: the people with the expertise to make the machines sit up and do useful work (sys admins, architects, programmers, and more). Any rube with $50M can buy a big machine, but it takes talent to use one effectively. That’s differentiation that matters.
Finally, I would be remiss if I did not opine on the most obvious, visual difference between cloud data centers and high-performance computing (HPC) facilities. The former are designed for function, not appearance. They are usually nondescript facilities optimized for efficient hardware operation at large scale, not for human accessibility or for comfort. Indeed, container-based data centers look more like a warehouse and distribution center with parking and utility connections than Hollywood’s idea of a computing center. Conversely, HPC facilities are usually showpieces with signs, elegant packaging and lighted spaces suitable for tours by visiting dignitaries.
At large scale, efficient trumps pretty. It’s all about what one measures.
Intel’s Facebook-based parallel computing project
This week Intel announced the beta release of a new Facebook application that lets users donate their unused processor cycles to research projects. Think SETI@home for the social networking crowd. From the article at ComputerWeekly:
Intel’s Facebook peer-to-peer application, Progress Thru Processors, allows users to donate their PCs’ unused processor power to research projects such as Rosetta@home, which uses the additional computing power to help find cures for cancer and other diseases such as HIV and Alzheimer’s.
In addition to Rosetta@home, Progress Thru Processors participants can choose to contribute excess processor computing power to the research efforts of Climateprediction.net and Africa@home.
Facebook users can find the application at Progress Thru Processors on Facebook.
PGI Visual Fortran adds MPI debugging to Windows suite
Compiler maker The Portland Group announced late last week that the latest version of its Visual Fortran product adds support for debugging Microsoft MPI applications on Windows:
…PVF 9.0 is the first general release to include support for the building, launching and debugging of Microsoft MPI (MSMPI) Fortran applications from within the Microsoft Visual Studio integrated development environment.
PVF augments the Visual Studio debugger by adding a Fortran language specific custom debug engine. The PVF debug engine supports debugging of single and multi-thread, OpenMP, multi-thread MSMPI and hybrid MSMPI+OpenMP Fortran applications. It enables debugging of 64-bit or 32-bit applications using source code or assembly code, and provides full access to the registers and hardware state of the processors. Other new multi-process MSMPI capabilities in PVF 9.0 include Visual Studio property pages for configuring compile-time options, launching applications either locally on a workstation or on a distributed-memory Windows HPC Server 2008 cluster system, and debugging of programs running either locally or on a cluster.
This release also includes the PGI Accelerator framework (which we’ve written about) that lets developers target CPUs and GPUs without delving into CUDA.
—–
John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at [email protected].