June 30, 2006
Thanks to this week's International Supercomputing Conference (ISC) in Dresden, Germany, today's issue of HPCwire is one of the largest of the year. ISC always seems to put a charge into the HPC community and this year was no exception. Here's a rundown on some of this issue's highlights as well as some pointers to a few exceptional articles from our special ISC coverage on Wednesday and Thursday.
If you're wondering where to begin, I encourage you to start with Thomas Sterling's supercomputing retrospective of the past year -- that is, June 2005 to June 2006. When you live and die for HPC, you celebrate the New Year on a slightly different schedule than the rest of the world. Thomas offers an insightful wrap-up of the most important supercomputing developments since last summer. He discusses multi-core evolution, the rising interest in heterogeneous architectures, and offers his own "State-of-the-Petaflops" address.
Also be sure to catch John Gustafson's perspective on the escalating energy crisis in supercomputing. John is the CTO of HPC at ClearSpeed, a company that knows a few things about energy-efficient high performance computing. The article is full of wit and wisdom. I highly recommend it.
Our feature interview this week is with Chuck Seitz, CEO and founder of Myricom. In talking with Chuck on many occasions over the last few months, I've come to realize that he's one of the real visionaries in HPC today. In our Q&A, Chuck describes some of the latest Myricom offerings, offers his thoughts on Microsoft's cluster computing solution, and gives us his take on the Ethernet versus Ethernot (specialty networks) debates.
If you missed Thursday's special coverage of ISC, be sure to go back and read Reiner Hartenstein's piece on Reconfigurable Computing. Hartenstein, a professor at TU Kaiserslautern in Germany, believes that FPGAs, and the paradigm they bring with them, represent a game-changing model for supercomputing. He challenges system designers to alter their way of thinking about building next-generation HPC platforms. Here's an excerpt:
"... just putting more CPUs on the chip is not the way to go for very high performance. We have learned this lesson from the supercomputing community, which has paid an extremely high price for monstrous installations by following the wrong road map for decades."
In that same ISC issue from Thursday, be sure to read our Q&A with Alexander Reinefeld, who also talks about FPGAs. He's less confrontational than Hartenstein and offers some very worthwhile insights. Don't miss the Alan Turing quote at the end of the interview.
In Wednesday's special coverage of ISC, I'd like to point out a couple of noteworthy articles.
The first one is our interview with Horst Simon. In this Q&A, he talks about the challenges involved in obtaining meaningful petaflops performance, touching on some of the issues I talked about in last week's editorial commentary. Here's an excerpt that sums up Simon's concerns about petaflops myopia:
"... if in all this enthusiasm we settle for just the easy goals, such as the first peak or Linpack petaflop performance, we may have a "petaflop before its time." Once the peak and Linpack milestones are achieved in 2008 or so, the real hard work begins, the work of achieving petaflop performance in production computing environments."
Also in the Wednesday ISC issue, be sure to read Stephen Wheat's perspective on the Top500 list and the associated Linpack benchmark. Wheat, the Senior Director of Intel's HPC Platform Office, argues that the HPC world has grown too complex to rely on just a single metric anymore. From his point of view, many industry vendors and end-users are also reaching this conclusion. So what will we use to take the place of Linpack? Wheat sees no easy answers, but suggests that the new metrics will have to account for the growing diversity of the high performance computing market.
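For a sense of what a multi-metric successor to Linpack might look like in practice, here's a minimal sketch in Python that folds several normalized benchmark scores into a weighted geometric mean. The benchmark categories, scores, and weights are all hypothetical, my own illustration rather than anything proposed by Wheat or Intel:

    from math import prod

    # Hypothetical benchmark results, each normalized against a reference system.
    scores = {"dense_linear_algebra": 1.8,   # Linpack-style
              "memory_bandwidth": 1.2,       # STREAM-style
              "random_access": 0.9,          # GUPS-style
              "interconnect": 1.5}           # latency/bandwidth tests

    # Weights reflect how much a given workload mix values each component.
    weights = {"dense_linear_algebra": 0.25, "memory_bandwidth": 0.35,
               "random_access": 0.20, "interconnect": 0.20}

    # Weighted geometric mean keeps any single metric from dominating.
    composite = prod(scores[k] ** weights[k] for k in scores)
    print(f"Composite score: {composite:.3f}")  # > 1.0 means faster than reference

The appeal of a scheme like this is that different user communities could simply adjust the weights to match their own workload mix, which is exactly the kind of market diversity Wheat is pointing to.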
News this week
Too much HPC news to talk about. Every HPC vendor and analyst had something to say this week. What follows are just some of the announcements that caught my eye.
On Monday, Intel officially released its dual-core Xeon Processor 5100 series (formerly code-named Woodcrest). Most of the usual OEM suspects had already announced they would be building platforms based on the new Intel server chip. However, this week SGI announced a new Altix XE cluster line based on the 5100 series chips. This is a first for SGI, which in the past has focused almost exclusively on high-end systems for the technical computing market. The new cluster line is part of the company's new strategy to make inroads into the enterprise space. Along with its existing Itanium-based Altix lines, SGI has apparently committed itself to being an Intel-only shop, at least for the time being.
HP introduced its new blade system solution for HPC at ISC this week. The blades support a 20 Gbps InfiniBand interconnect, twice as fast as the current IBM BladeCenter cluster systems. With the InfiniBand (IB) ecosystem moving from 10 to 20 Gbps, the IB vendors look like they're trying to get some bandwidth distance between their latest offerings and 10 Gigabit Ethernet solutions, which recently have approached InfiniBand's low latency performance using a variety of clever technologies. All of these increases in interconnect performance should make cluster platforms even more attractive than they are today.
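The latency-versus-bandwidth tradeoff at work here falls out of the standard "alpha-beta" cost model for message passing, where transfer time equals latency plus message size divided by bandwidth. The link figures in this sketch are nominal assumptions for illustration only; sustained rates on real hardware will differ:

    # Alpha-beta model: time to move a message = latency + size / bandwidth.
    def transfer_time_us(size_bytes, latency_us, bandwidth_gbps):
        return latency_us + (size_bytes * 8) / (bandwidth_gbps * 1e3)

    # Nominal (latency in microseconds, bandwidth in Gbps) pairs, for illustration.
    links = {"10 GbE": (10.0, 10.0),
             "IB 10 Gbps": (5.0, 10.0),
             "IB 20 Gbps": (5.0, 20.0)}

    for size in (1_024, 1_048_576):  # a 1 KB message and a 1 MB message
        for name, (lat, bw) in links.items():
            t = transfer_time_us(size, lat, bw)
            print(f"{size:>9} B over {name:<10}: {t:8.1f} us")

Run it and the pattern is clear: small messages are latency-bound, which is where 10 Gigabit Ethernet's catch-up matters, while large messages are bandwidth-bound, which is where the jump to 20 Gbps pays off and makes clusters that much more attractive.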
IDC seems to think so too. On Wednesday, during the company's analyst briefing at ISC, research VP Christopher Willard predicted that by 2010 cluster systems will account for 75 to 85 percent of all high performance technical computing revenue (unless a major new disruptive technology comes along). The article goes on to describe some of the other factors that will affect cluster adoption.
Here are my predictions: By 2015, clusters will become the most common form of residential and commercial heating in the U.S. And by 2020, there will be so many clusters in the world, there won't be any room for IT marketing executives. I didn't do any actual research to reach these conclusions; it's just wishful thinking.
Until next week...
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - June 29, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.