Here’s a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
SiCortex announces new deskside, responds to CX1;
NY investment to create High Performance Computation Consortium;
HPC career outlook: share with an undergrad you know;
Congress says DARPA can’t “responsibly manage” requested FY09 increase;
Video tour of new Intel datacenter;
TACC Ranger video tour;
MPI standards update first since 1997, more coming;
Star-P coming to Windows;
Craig Mundie outlines the future of computing;
Integrating CUDA and Visual C++;
Sun releases GridEngine beginner’s guide;
Stratus markets high availability solution to Win HPC audience;
Russian Top50 HPC list out;
Allinea releases debugging tool for MS Visual Studio.
>>Google datacenters: good, better, best
This just in from Twitter (hi, @datacenter): an article posted today at Data Center Knowledge with details of Google’s datacenter energy use:
Google today disclosed details of its data center energy usage, confirming that it operates some of the most efficient facilities in the world. Google said it is averaging a Power Usage Efficiency (PUE) rating of 1.21 across its six company-built data centers, and one of its facilities is operating with a PUE of 1.13, the lowest ever published and just above the “perfect” efficiency score of 1.0.
According to the article, a typical datacenter PUE is 2.0, and “…the lowest claim in our travels here at DCK is a PUE of 1.28 for Sun’s data center in Santa Clara, Calif.”
“We reduced the energy-weighted average overhead across all Google-built data centers to 21% versus the average of 96% reported by the EPA,” Google says in a new section of its web site dedicated to data center efficiency. “In other words, compared to standard data centers we’ve reduced the overhead by more than fourfold. To our knowledge, no other large-scale production data center has ever operated as efficiently. In fact, one of our data centers is running at an even lower overhead of only 15%, a sixfold improvement in efficiency.”
Interesting, interesting article with a peek inside a traditionally very secretive tech company. If you are responsible for a datacenter, you need to read it.
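If the relationship between the overhead percentages Google quotes and the PUE numbers in the article isn’t obvious, here’s a minimal sketch of the arithmetic. PUE is simply total facility power divided by power delivered to IT equipment, so an overhead of 21% over the IT load corresponds to a PUE of 1.21 (the function name below is ours, just for illustration):

```python
# PUE = total facility power / IT equipment power.
# If cooling, power distribution, lighting, etc. add an "overhead"
# on top of the IT load, then PUE = 1 + overhead fraction.

def pue_from_overhead(overhead_pct: float) -> float:
    """Convert an overhead percentage (e.g. 21 for 21%) to a PUE value."""
    return 1.0 + overhead_pct / 100.0

# Figures quoted in the article:
print(pue_from_overhead(21))   # Google fleet average -> PUE 1.21
print(pue_from_overhead(96))   # EPA-reported average -> PUE 1.96 (~2.0)
print(pue_from_overhead(15))   # Google's best facility -> PUE 1.15

# Google's "more than fourfold" reduction in overhead vs. the EPA average:
print(96 / 21)   # roughly 4.6x
print(96 / 15)   # roughly 6.4x for the best facility
```

This also shows why a PUE of 1.0 is the “perfect” score: it would mean every watt entering the building reaches the computing equipment, with zero overhead.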
>>Tech on display at the High Productivity HPC event in Houston
This week Cluster Resources announced their participation in High Productivity HPC, Oct. 2 at PCPC Direct’s headquarters in Houston, Texas. Other partners in the event include HP, Microsoft, and PCPC Direct – a provider of high-performance technology solutions in specialized sectors like oil and gas.
Cluster Resources will demonstrate the Moab Hybrid Cluster — an HPC solution that dynamically changes cluster servers between Linux and Windows based on workload, defined policies, and application needs — on PCPC’s hybrid boot cluster.
“The Moab Hybrid Cluster solution overcomes the capacity-planning problem inherent when estimating the number of servers allocated to different OS resource pools,” stated Michael Jackson, president of Cluster Resources. “Traditionally, these static resource pools have different peak usage times, where one OS remains idle while another has a backlog of workload. The hybrid model breaks down OS resource silos, letting OS pools grow and shrink dynamically to take advantage of otherwise idle compute resources. It also intelligently overcomes hardware and job failures by reallocating compute nodes with the proper OS to compensate for the failures.”
HP will also be showing its workgroup system, about which we haven’t heard in a while:
High Productivity HPC will also showcase the affordable, powerful, and easy-to-use HP Cluster Platform Workgroup System (CPWS). This HPC cluster solution is based on the compact HP BladeSystem c3000 enclosure, which is designed for smaller technology sites, branch offices, and remote locations. Powered by industry-leading HP BladeSystem c-Class servers, CPWS is easily configured, ordered, and deployed without special power or cooling. This HPC platform enables midsize customers to drive new levels of innovation and productivity while still managing overhead costs.
Admission to the event is free; for info or registration email [email protected].
>>New Red Hat Linux for HPC
Linux Today carried news this week of a forthcoming HPC distro of everyone’s favorite OS. This distro marks a change from the previous Red Hat approach to HPC, as noted by the Inquirer:
The current offerings, Red Hat Enterprise Linux HPC Compute Node subscription, are based on the standard distro but tailored for compute nodes running HPC workloads. The subscription is available for HPC compute nodes used in clusters with four or more systems. Users access compute nodes via head node servers, which of course use the more expensive Red Hat Enterprise Linux ES or AS.
Instead, the upcoming Red Hat HPC is actually a fully integrated software stack using combined technologies from Red Hat and Platform Computing.
Red Hat released the HPC solution on Oct. 2 (five days sooner than was anticipated by Linux Today). From the announcement:
The Red Hat HPC Solution incorporates all of the components necessary to deploy and maintain HPC clusters, including Red Hat Enterprise Linux 5.2, the world’s leading open source operating system, and Platform Computing’s cluster software framework, Platform Open Cluster Stack 5. The solution also includes device drivers, a simple cluster installer, cluster management tools, a resource and application monitor, interconnect support and Platform Lava, a powerful job scheduler.
The integration with Platform follows up on the announcement made last November that the two companies would start working more closely together in HPC.