Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

July 30, 2009

The Week in Review

by John E. West

Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.

10 words and a link

CloudCamp Munich 2009

NTU’s iDataPlex helps students learn about financial crisis

Sandia advances Internet-scale research with one-million-Linux-kernel cluster

Fultheim on the parallels among network, storage and server virtualization

Convey Computer closes new funding, gears up to ship

Voltaire tries to build 10GbE ecosystem

Parallel programming video series

Honey bees and supercomputers

PNNL commissions 160TF supercomputer

60 companies slated for NVIDIA emerging companies conference

MPI-2.2 standard finalized

Measuring the value: cloud computing testbeds

SGI CEO asserts SGI’s continuing commitment to Itanium solutions

Responding to reports in the media lately (HPCwire and eWeek, for example) wondering aloud when SGI would finally axe the Itanium from its roadmap, SGI CEO Mark Barrenechea said “not gonna do it” on his blog last week:

…There is, however, a basic point that needs a bit more emphasis. SGI is 100% committed to Itanium.

I’ll say this for him: he’s consistent. And this really isn’t surprising, given the installed base of customers and given that those customers have been through some very tough times with SGI lately. When insideHPC interviewed Barrenechea soon after he took over the corner office at SGI, he was saying the same thing:

At the same time Barrenechea was at pains to explain that SGI is fully committed to the current generation of shared memory products based on Itanium, and said that although there isn’t a final decision the company’s current thinking is that SGI will continue to offer shared memory systems based on both chips.

But two can play the consistency game, and I remain firmly committed to my opinion that Itanium does not have a long term future in a commercially-viable product at SGI.

In round numbers, the Itanium/shared memory combo carries a 100 percent price premium over distributed memory clusters. With software solutions (VMs and the like) now viable for at least some of the customers who need large, globally addressable memory, the portion of the market interested in buying this hardware is growing ever smaller. When SGI introduces its Xeon shared memory platform (codenamed Ultraviolet) at a price premium closer to 25 percent over distributed memory clusters (according to company insiders), it will eliminate that market.

I’ve given this qualification before, and I think it’s good to revisit it from time to time: I don’t run a multi-million dollar international hardware company, so what do I know?

CCC announces network science and engineering research agenda

The Computing Community Consortium announced last week that they’ve released their Network Science & Engineering (NetSE) Research Agenda:

Over the past forty years, computer networks, and especially the Internet, have gone from research curiosity to fundamental infrastructure. However, this is no time to rest on the successes of the past. To meet society’s future requirements and expectations the Internet will need to be better: more secure, more accessible, more predictable and more reliable.

The intended audiences for the report include members of the computing research community, funding agencies, and policymakers. The report provides a framework or context within which various targeted research agendas can be moved forward by their communities. The report is your document (literally hundreds have contributed to it in various ways), and it is a living document: comments are earnestly solicited, as indicated on CCC’s NetSE activity web page.

HPC in the Stimulus

As we’ve noted before, the American Recovery and Reinvestment Act (aka the Stimulus Act) signed into law in the US by President Obama in February of this year has a mostly indirect impact on HPC. Not much funding directly for the industry or for the deployment of new computers, but lots of funding for projects that will need HPC.

Case in point: the NIH got more than $10B of stimulus funding, $923,000 of which is going to Marylyn Ritchie, who directs the Computational Genomics Core at Vanderbilt University Medical Center.

Ritchie, an associate professor of molecular physiology and biophysics, seeks to determine the connections between genetic and environmental factors that contribute to common, complex diseases like diabetes.

The goal of her project, which began July 1, “is to develop a way to integrate genetic data with other types of knowledge and with public databases,” she said.

Supercomputers must be programmed to analyze data in ways that reveal the greatest amount of significant information. But it’s going to take time to find the best route.

“My approach to analysis is to look at the whole genome in an unbiased way,” said Ritchie, whose group is using the University’s supercomputer, ACCRE (Advanced Computing Center for Research and Education).


John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com.