Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
10 words and a link
Cray announces Q4 and 2008 results, posts paper loss
SGI announces Q2, posts loss
NVIDIA announces Q4, posts loss
32nm on track at Intel but Itanium upgrade delayed
SGI headed to Russian weather forecasting establishment
SiCortex wins at Karlsruhe Institute of Technology
6 ideas for dealing with the graying of HPC
Wozniak joins Fusion-IO
Concurrent Thinking updates cluster appliances
Voltaire announces 40Gb/s Director Switch
HPC seminar in Canada
DARPA’s Tether moves on
The Green Grid launches new datacenter metrics
Last week The Green Grid announced that it is launching new methods [PDF] for measuring and reporting energy efficiency and datacenter productivity. Also of interest are the potential measurements of useful work in datacenters, the “Proxies for Estimating Data Center Productivity,” which the organization is currently discussing (public comment welcome).
Several white papers related to this week’s announcement are now available:
PUE Scalability — This white paper explains how to use collected energy consumption data to produce statistical analyses, and introduces a new metric, PUE Scalability, to better assess how well a facility’s total power consumption scales with changes in IT power consumption (a rough sketch of the idea follows this list).
Proxies for Estimating Data Center Productivity — This paper outlines eight possible methods for measuring useful work in the data center; the public will have the opportunity to offer feedback on which method they prefer.
Using Virtualization To Improve Data Center Efficiency — This paper outlines some of the advantages, considerations, processes, and implementation strategies needed to reduce server power consumption in a data center using virtualization techniques.
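For those who want to poke at the numbers, here is a minimal sketch of the intuition behind PUE and the scaling question the new metric addresses. The exact PUE Scalability formula is defined in the white paper; the load points below are hypothetical, and the marginal-power calculation is just my shorthand for how total facility power tracks IT power.

```python
# Hypothetical load points: (IT power in kW, total facility power in kW).
samples = [(400, 760), (600, 1080), (800, 1400), (1000, 1720)]

# PUE = total facility power / IT equipment power (the Green Grid metric).
for it_kw, facility_kw in samples:
    print(f"IT load {it_kw:5d} kW -> PUE = {facility_kw / it_kw:.2f}")

# How well does total power scale with IT power? Compare the marginal
# facility power per extra IT kW between the lowest and highest load
# points; a value near 1.0 means the overhead (cooling, power
# distribution) scales well with IT load.
(it_lo, fac_lo), (it_hi, fac_hi) = samples[0], samples[-1]
print(f"Marginal facility kW per IT kW: {(fac_hi - fac_lo) / (it_hi - it_lo):.2f}")
```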
Germany goes petaflops
Forschungszentrum Juelich and IBM have a press release announcing that the Gauss Center for Supercomputing (GCS), a cooperation between the German national HPC centers (FZJ, HLRS at Stuttgart, and LRZ at Garching), will install a petaflops supercomputer later this year. The super will be an IBM BlueGene/P system, similar to the existing 222 teraflops “JUGENE” BG/P at FZJ. The 72 racks of the new (as yet unnamed) system will consume only 2,200 kilowatts.
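A quick back-of-envelope calculation, assuming the new machine peaks at exactly 1 petaflops, shows why those 2,200 kilowatts are worth noting:

```python
# Back-of-envelope power efficiency for the 72-rack BG/P, assuming a
# 1 petaflops peak and the 2,200 kW figure from the press release.
peak_flops = 1.0e15        # assumed peak: 1 petaflops
power_watts = 2200 * 1e3   # 2,200 kW
print(f"{peak_flops / power_watts / 1e6:.0f} megaflops per watt")  # ~455
```

That flops-per-watt figure is a big part of the Blue Gene line’s appeal.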
This latest announcement, on top of other recent supercomputer announcements from FZJ (e.g., a 200 teraflops Bull supercomputer), shows that German funding bodies understand the value of leadership-scale supercomputing, and helps confirm Germany, GCS, and FZJ as leaders of the European HPC scene.
FZJ is also central to the European PRACE project, which is working to deploy a Europe-wide Petascale supercomputing service in the next year or two.
Details on Intel’s 8-core chip
Pointer from Multicoreinfo.com to an article at Ars Technica providing a peek at some of the details of Intel’s forthcoming 8-core chip:
At an ISSCC session Monday, Intel went into new detail on its forthcoming 8-core, 16-thread Xeon processor, a 64-bit processor that’s a member of the Nehalem family. Much of the session was focused on the packaging and power aspects of the device, so I’ll recap some of the more interesting parts of that here.
I particularly enjoyed reading this bit:
One major part of the Xeon presentation is Intel’s “cache and core recovery” scheme, which lets the company salvage a usable part from a defective chip by disabling the defective regions and selling the chip with a lower core count or cache amount.
So for instance, if testing and validation finds a defect in a cache slice on a chip, then Intel can disable that slice and sell the chip with lower cache. And likewise with cores, so that you might buy a six-core chip from Intel that was originally produced as an 8-core Xeon but had two defective cores.
There’s much more in the full article. Recommended read.
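If it helps to see the binning idea in code, here is a toy sketch. To be clear, this is purely illustrative; the function and the core/cache numbers are my invention, not Intel’s actual test-and-fuse process.

```python
def bin_die(total_cores, cache_mb, bad_cores=0, bad_cache_mb=0):
    """Decide what SKU a die can be sold as after disabling defects.

    Purely illustrative; not Intel's actual test-and-fuse process.
    """
    good_cores = total_cores - bad_cores
    good_cache = cache_mb - bad_cache_mb
    if good_cores < 1 or good_cache < 1:
        return "scrap"
    return f"sell as {good_cores}-core part with {good_cache} MB cache"

# An 8-core die with two defective cores becomes a salable 6-core part.
print(bin_die(8, 24, bad_cores=2))        # sell as 6-core part with 24 MB cache
# A die with a bad cache slice ships with less cache instead of being scrapped.
print(bin_die(8, 24, bad_cache_mb=3))     # sell as 8-core part with 21 MB cache
```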
Using virtualization to make many servers look like one super
ScaleMP founder and CEO Shai Fultheim has an article at the Virtualization Journal extolling the benefits of using server virtualization to aggregate many servers into one, rather than the usual one-into-many partitioning approach used in the enterprise:
There is a new emerging, third kind of computing virtualization: high-end virtualization in which multiple physical systems appear to function as a single logical system. This virtualization paradigm is known as aggregation and it is basically the opposite of partitioning. The building blocks of this approach are the same x86 industry standard servers used in the scale-out (clustering) approach, preserving the low cost. In addition, by running a single logical system, customers manage a single operating system, and take advantage of large contiguous memory and unified I/O architecture.
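To make the aggregation picture concrete, here is a toy model of a single logical address space stitched together from per-node memory. This is my own illustration, not ScaleMP’s implementation; the node count, memory sizes, and mapping function are all made up.

```python
# Toy model of aggregation: one logical address space spanning four nodes.
# Node count, memory size, and this mapping are made up for illustration.
NODE_MEM_GB = 64
nodes = ["node0", "node1", "node2", "node3"]  # 4 x 64 GB = one 256 GB "machine"

def locate(logical_gb):
    """Map a logical memory offset (in GB) to (physical node, local offset)."""
    return nodes[logical_gb // NODE_MEM_GB], logical_gb % NODE_MEM_GB

# The single OS sees 256 GB of contiguous memory; under the hood,
# logical offset 130 GB actually lives on the third physical box.
print(locate(130))  # ('node2', 2)
```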
—–
John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at [email protected].