Thanks to this week's International Supercomputing Conference (ISC) in Dresden, Germany, today's issue of HPCwire is one of the largest of the year. ISC always seems to put a charge into the HPC community and this year was no exception. Here's a rundown on some of this issue's highlights as well as some pointers to a few exceptional articles from our special ISC coverage on Wednesday and Thursday.
If you're wondering where to begin, I encourage you to start with Thomas Sterling's supercomputing retrospective of the past year — that is, June 2005 to June 2006. When you live and die for HPC, you celebrate the New Year on a slightly different schedule than the rest of the world. Thomas offers an insightful wrap-up of the most important supercomputing developments since last summer. He discusses multi-core evolution, the rising interest in heterogeneous architectures, and offers his own “State-of-the-Petaflops” address.
Also be sure to catch John Gustafson's perspective on the escalating energy crisis in supercomputing. John is the CTO of HPC at ClearSpeed, a company that knows a few things about energy-efficient high performance computing. The article is full of wit and wisdom. I highly recommend it.
Our feature interview this week is with Chuck Seitz, CEO and founder of Myricom. In talking with Chuck on many occasions over the last few months, I've come to realize that he's one of the real visionaries in HPC today. In our Q&A, Chuck describes some of the latest Myricom offerings, offers his thoughts on Microsoft's cluster computing solution, and gives us his take on the Ethernet versus Ethernot (specialty networks) debates.
If you missed Thursday's special coverage of ISC, be sure to go back and read Reiner Hartenstein's piece on Reconfigurable Computing. Hartenstein, a professor at TU Kaiserslautern in Germany, believes that FPGAs, and the paradigm they bring with them, represent a game-changing model for supercomputing. He challenges system designers to alter their way of thinking about building next-generation HPC platforms. Here's an excerpt:
“… just putting more CPUs on the chip is not the way to go for very high performance. We have learned this lesson from the supercomputing community, which has paid an extremely high price for monstrous installations by following the wrong road map for decades.”
In that same ISC issue from Thursday, be sure to read our Q&A with Alexander Reinefeld, who also talks about FPGAs. He's less confrontational than Hartenstein and offers some very worthwhile insights. Don't miss the Alan Turing quote at the end of the interview.
In Wednesday's special coverage of ISC, I'd like to point out a couple of noteworthy articles.
The first one is our interview with Horst Simon. In this Q&A, he talks about the challenges involved in obtaining meaningful petaflops performance, touching on some of the issues I talked about in last week's editorial commentary. Here's an excerpt that sums up Simon's concerns about petaflops myopia:
“… if in all this enthusiasm we settle for just the easy goals, such as the first peak or Linpack petaflop performance, we may have a “petaflop before its time.” Once the peak and Linpack milestones are achieved in 2008 or so, the real hard work begins, the work of achieving petaflop performance in production computing environments.”
Also in the Wednesday ISC issue, be sure to read Stephen Wheat's perspective on the Top500 list and the associated Linpack benchmark. Wheat, the Senior Director of Intel's HPC Platform Office, argues that the HPC world has grown too complex to rely on just a single metric anymore. From his point of view, many industry vendors and end-users are also reaching this conclusion. So what will we use to take the place of Linpack? Wheat sees no easy answers, but suggests that the new metrics will have to account for the growing diversity of the high performance computing market.
News this week
Too much HPC news to talk about. Every HPC vendor and analyst had something to say this week. What follows are just some of the announcements that caught my eye.
On Monday, Intel officially released its dual-core Xeon Processor 5100 series (formerly code-named Woodcrest). Most of the usual OEM suspects had already announced they would be building platforms based on the new Intel server chip. However, this week SGI announced a new Altix XE cluster line based on the 5100 series chips. This is a first for SGI, which in the past has focused almost exclusively on high-end systems for the technical computing market. The new cluster line is part of the company's new strategy to make inroads into the enterprise space. Along with its existing Itanium-based Altix lines, SGI has apparently committed itself to being an Intel-only shop, at least for the time being.
HP introduced its new blade system solution for HPC at ISC this week. The blades support a 20 Gbps InfiniBand interconnect, twice as fast as the current IBM BladeCenter cluster systems. With the InfiniBand (IB) ecosystem moving from 10 to 20 Gbps, the IB vendors look like they're trying to get some bandwidth distance between their latest offerings and 10 Gigabit Ethernet solutions, which recently have approached InfiniBand's low latency performance using a variety of clever technologies. All of these increases in interconnect performance should make cluster platforms even more attractive than they are today.
IDC seems to think so too. On Wednesday, during the company's analyst briefing at ISC, research VP Christopher Willard predicted that by 2010 cluster systems will account for 75 to 85 percent of all high performance technical computing revenue (unless a major new disruptive technology comes along). The article goes on to describe some of the other factors that will affect cluster adoption.
Here are my predictions: By 2015, clusters will become the most common form of residential and commercial heating in the U.S. And by 2020, there will be so many clusters in the world that there won't be any room for IT marketing executives. I didn't do any actual research to reach these conclusions; it's just wishful thinking.
Until next week…
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].