June 30, 2006
Thanks to this week's International Supercomputing Conference (ISC) in Dresden, Germany, today's issue of HPCwire is one of the largest of the year. ISC always seems to put a charge into the HPC community and this year was no exception. Here's a rundown on some of this issue's highlights as well as some pointers to a few exceptional articles from our special ISC coverage on Wednesday and Thursday.
If you're wondering where to begin, I encourage you to start with Thomas Sterling's supercomputing retrospective of the past year -- that is, June 2005 to June 2006. When you live and die for HPC, you celebrate the New Year on a slightly different schedule than the rest of the world. Thomas offers an insightful wrap-up of the most important supercomputing developments since last summer. He discusses multi-core evolution, the rising interest in heterogeneous architectures, and offers his own "State-of-the-Petaflops" address.
Also be sure to catch John Gustafson's perspective on the escalating energy crisis in supercomputing. John is the CTO of HPC at ClearSpeed, a company that knows a few things about energy-efficient high performance computing. The article is full of wit and wisdom. I highly recommend it.
Our feature interview this week is with Chuck Seitz, CEO and founder of Myricom. In talking with Chuck on many occasions over the last few months, I've come to realize that he's one of the real visionaries in HPC today. In our Q&A, Chuck describes some of the latest Myricom offerings, offers his thoughts on Microsoft's cluster computing solution, and gives us his take on the Ethernet versus Ethernot (specialty networks) debates.
If you missed Thursday's special coverage of ISC, be sure to go back and read Reiner Hartenstein's piece on Reconfigurable Computing. Hartenstein, a professor at TU Kaiserslautern in Germany, believes that FPGAs, and the paradigm they bring with them, represent a game-changing model for supercomputing. He challenges system designers to alter their way of thinking about building next-generation HPC platforms. Here's an excerpt:
"... just putting more CPUs on the chip is not the way to go for very high performance. We have learned this lesson from the supercomputing community, which has paid an extremely high price for monstrous installations by following the wrong road map for decades."
In that same ISC issue from Thursday, be sure to read our Q&A with Alexander Reinefeld, who also talks about FPGAs. He's less confrontational than Hartenstein and offers some very worthwhile insights. Don't miss the Alan Turing quote at the end of the interview.
In Wednesday's special coverage of ISC, I'd like to point out a couple of special articles.
The first one is our interview with Horst Simon. In this Q&A, he talks about the challenges involved in obtaining meaningful petaflops performance, touching on some of the issues I talked about in last week's editorial commentary. Here's an excerpt that sums up Simon's concerns about petaflops myopia:
"... if in all this enthusiasm we settle for just the easy goals, such as the first peak or Linpack petaflop performance, we may have a "petaflop before its time." Once the peak and Linpack milestones are achieved in 2008 or so, the real hard work begins, the work of achieving petaflop performance in production computing environments."
Also in the Wednesday ISC issue, be sure to read Stephen Wheat's perspective on the Top500 list and the associated Linpack benchmark. Wheat, the Senior Director of Intel's HPC Platform Office, argues that the HPC world has grown too complex to rely on just a single metric anymore. From his point of view, many industry vendors and end-users are also reaching this conclusion. So what will we use to take the place of Linpack? Wheat sees no easy answers, but suggests that the new metrics will have to account for the growing diversity of the high performance computing market.
News this week
Too much HPC news to talk about. Every HPC vendor and analyst had something to say this week. What follows are just some of the announcements that caught my eye.
On Monday, Intel officially released its dual-core Xeon Processor 5100 series (formerly code-named Woodcrest). Most of the usual OEM suspects had already announced they would be building platforms based on the new Intel server chip. However, this week SGI announced a new Altix XE cluster line based on the 5100 series chips. This is a first for SGI, which in the past has focused almost exclusively on high-end systems for the technical computing market. The new cluster line is part of the company's new strategy to make inroads into the enterprise space. Along with its existing Itanium-based Altix lines, SGI has apparently committed itself to being an Intel-only shop, at least for the time being.
HP introduced its new blade system solution for HPC at ISC this week. The blades support a 20 Gbps InfiniBand interconnect, twice as fast as the current IBM BladeCenter cluster systems. With the InfiniBand (IB) ecosystem moving from 10 to 20 Gbps, the IB vendors look like they're trying to get some bandwidth distance between their latest offerings and 10 Gigabit Ethernet solutions, which recently have approached InfiniBand's low latency performance using a variety of clever technologies. All of these increases in interconnect performance should make cluster platforms even more attractive than they are today.
IDC seems to think so too. On Wednesday, during the company's analyst briefing at ISC, research VP Christopher Willard predicted that by 2010 cluster systems will account for 75 to 85 percent of all high performance technical computing revenue (unless a major new disruptive technology comes along). The article goes on to describe some of the other factors that will affect cluster adoption.
Here are my predictions: By 2015, clusters will become the most common form of residential and commercial heating in the U.S. And by 2020, there will be so many clusters in the world, there won't be any room for IT marketing executives. I didn't do any actual research to reach these conclusions; it's just wishful thinking.
Until next week...
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - June 29, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.