July 23, 2009

The Week in Review

by John E. West

Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.

10 words and a link

The technology behind Convey, and the thinking that created it

PGI’s v9 compiler aims for straightforward GPU+CPU programming

Flash to the past with mob supercomputing

AMD reports Q2, loses money

IBM cost cutting leads Q2 profit while hardware sales slump

IBM announces POWER7 upgrade path, preserve your POWER6 s/n

NVIDIA releases CUDA 2.3

LSI buys NAS provider ONStor

HP consolidates storage strategy with IBRIX acquisition

Yup, Sun shareholders approved the Oracle acquisition

Cisco cuts 600 to 700 jobs at its HQ

HPC helps model the threat to hazmat transport

FAQ about using clouds to process large data

Seen on Twitter

ckittel : Conversation with @kphutt — A crayfish is not a fish, so it must be a supercomputer. As such, it should be able to run Linux.

Stay up-to-the-minute with all the news you need to know from around the supercomputing community. Follow insideHPC and HPCwire on Twitter!

HP aggressively targets Sun customers

HP issued a press release this past week announcing its “Sun Complete Care Program.” It is a direct shot at Sun’s hardware business, offering current Sun customers free migration and consulting services if they consider making the switch, along with deep discounts on hardware and software and financing incentives for those who do.

More than 100 customers have chosen to migrate to HP server and storage platforms over the last six months to significantly improve the return on their investments. On average, Sun customers are paying up to 80 percent more in total cost of ownership (TCO) for Sun SPARC servers than customers using HP Integrity servers. Many applications are also significantly more expensive to run on SPARC servers compared with HP servers. It costs 50 percent more per core to run Oracle’s database software on SPARC than on HP Integrity.

What’s interesting here is that HP was rumored to have been in the Oracle/Sun deal at the beginning, with Oracle buying the software and HP buying the hardware portion. When HP dropped out, Oracle stepped up for the whole deal, but HP is rumored to have retained an option to buy Sun’s hardware business.

This release suggests that HP may have kept that option on the books in order to scuttle Sun’s hardware business and be there to pick up the pieces, at least on the server side. Or that the rumors weren’t true. Or simply that HP has decided it isn’t buying anything and is going to beat Sun the old-fashioned way, one sale at a time. There are lots of references in the press release to the “peace of mind” one gets from doing business with HP; the contrast HP’s people presumably want you to draw is with the anxiety and night sweats one gets from doing business with Sun these days.

Google running Belgian datacenter without chillers

According to an article by Rich Miller over at Data Center Knowledge, Google has opened a datacenter in Belgium that runs without chillers. Outside air economization (for a recent example see this article about Pete Beckman’s work at ANL) is not uncommon, but Google has taken this to its logical conclusion:

Rather than using chillers part-time, the company has eliminated them entirely in its data center near Saint-Ghislain, Belgium, which began operating in late 2008 and also features an on-site water purification facility that allows it to use water from a nearby industrial canal rather than a municipal water utility.

The climate in Belgium will support free cooling almost year-round, according to Google engineers, with temperatures rising above the acceptable range for free cooling about seven days per year on average. The average temperature in Brussels during summer reaches 66 to 71 degrees, while Google maintains its data centers at temperatures above 80 degrees.

What happens if it gets hot in Belgium? In that case the advantages of being the largest computing provider on the planet become evident:

On those days, Google says it will turn off equipment as needed in Belgium and shift computing load to other data centers. This approach is made possible by the scope of the company’s global network of data centers, which provide the ability to shift an entire data center’s workload to other facilities.

This is a remarkable feat of software engineering.

“You have to have integration with everything right from the chillers down all the way to the CPU,” said Gill, Google’s Senior Manager of Engineering and Architecture. “Sometimes, there’s a temperature excursion, and you might want to do a quick load-shedding to prevent a temperature excursion because, hey, you have a data center with no chillers. You want to move some load off. You want to cut some CPUs and some of the processes in RAM.”
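The mechanism Gill describes, watching for a temperature excursion and shedding load to other datacenters before it happens, can be sketched in a few lines. This is a minimal illustration, not Google's implementation; all names, data structures, and the 10-degree proportionality rule are assumptions made for the example.

```python
# Hypothetical sketch of temperature-triggered load shedding across a
# fleet of datacenters. Everything here (field names, thresholds, the
# shed-proportional-to-excess rule) is assumed for illustration only.

DEFAULT_LIMIT_F = 95.0  # assumed safe inlet-temperature ceiling


def plan_shed(datacenters, limit_f=DEFAULT_LIMIT_F):
    """Return (source, destination, load) moves that shift work away
    from any site running above its temperature limit."""
    hot = [d for d in datacenters if d["temp_f"] > limit_f]
    # Fill the least-loaded cool sites first.
    cool = sorted((d for d in datacenters if d["temp_f"] <= limit_f),
                  key=lambda d: d["load"])
    moves = []
    for site in hot:
        # Assume the load to shed scales with the temperature excess,
        # capped at the site's entire load.
        excess = site["load"] * min(1.0, (site["temp_f"] - limit_f) / 10.0)
        for dest in cool:
            if excess <= 0:
                break
            room = dest["capacity"] - dest["load"]
            take = min(excess, room)
            if take > 0:
                moves.append((site["name"], dest["name"], take))
                dest["load"] += take
                site["load"] -= take
                excess -= take
    return moves


fleet = [
    {"name": "be",      "temp_f": 98.0, "load": 80.0, "capacity": 100.0},
    {"name": "us-east", "temp_f": 72.0, "load": 40.0, "capacity": 100.0},
    {"name": "us-west", "temp_f": 68.0, "load": 30.0, "capacity": 100.0},
]
for src, dst, amount in plan_shed(fleet):
    print(f"shift {amount:.0f} units of load from {src} to {dst}")
```

The real system would have to do this continuously and predictively, and as Gill notes, it needs visibility all the way from the chillers down to the CPU; the sketch only captures the decision at the top.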

We could actually do this with supercomputing services on a national scale in the US, if we were to decide, and then behave as though, HPC is a strategic resource that deserves to be managed with a single coherent strategy. The US is large and geographically diverse enough to build datacenters around the country in areas with favorable climates, near cheap and/or environmentally friendly power sources. The US already has ample funding headed into HPC (not that we couldn’t blow your socks off with more, mind you), but it still lacks the will to focus those investments for maximum benefit.

Of course this would require datacenter owners and managers to give up the notion of being collocated with their machines. That tradition is rapidly crossing out of the realm of a quaint expression of an owner’s prerogative to give VIP tours and into the realm of a misuse of financial, energy, and natural resources.

IBM lines up against UCS with Juniper partnership

As reported by Stacey Higginbotham at GigaOM, IBM is responding to Cisco’s move into computing with a partnership that will put it in datacenter networking, creating a(nother) one stop datacenter shop:

IBM said today it will resell switches and routers made by Juniper under the IBM brand to complement Big Blue’s server products aimed at data centers. The move is a direct response to Cisco’s creation of its own brand of servers it calls the Unified Computing System, as well as efforts by Hewlett-Packard to bring that company’s ProCurve networking gear closer to its servers. They’re all part of a larger attempt to keep pushing the boundaries of virtualization beyond hardware and into the network itself.

…Today’s agreement deepens the partnership that Juniper has with IBM, and mimics relationships that Juniper has with certain carrier equipment makers to resell its products without the Juniper brand name. This is the first time, however, that the company has created such a relationship to get its products inside data centers. Juniper entered the data center market in 2005 with its purchase of NetScreen, and has since fought to take market share away from Cisco and HP. With this deal Juniper has the potential to boost sales, as IBM will get Juniper’s networking gear in front of a lot of new customers.

More in Stacey’s article.


John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at john@insidehpc.com.
