July 13, 2007
I'm not a big fan of the Top500 -- the list that ranks the 500 fastest supercomputers in the world. As most readers of this publication are aware, the rankings are based on the Linpack benchmark, which measures how well a system can perform a specific set of linear algebra calculations. As such, the benchmark provides some notion of the maximum floating-point performance possible from a given system. But since most HPC applications exhibit much more complex behavior than Linpack, the benchmark isn't that useful for predicting real-world performance.
The most interesting aspect of the list is seeing how the different technologies and companies represented in the Top500 are trending, and this is one of the major reasons the mainstream IT press follows the semi-annual rankings. And of course, everyone loves a competition. As for me, I'd be interested in seeing a few other tidbits of information in the list.
For example, how would the Top500 systems fare on the HPC Challenge (HPCC) benchmarks? The HPCC suite consists of seven codes (including Linpack) that measure a variety of performance characteristics, including memory bandwidth, system network communication capacity, and random memory update performance. Because of this, HPCC provides a more balanced view of how well a system might perform with real applications.
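To make the memory bandwidth piece concrete, here's a minimal C sketch in the spirit of the suite's STREAM "triad" test. The array size, scalar, and single-pass timing are illustrative choices of mine, not the official benchmark parameters; a real STREAM run repeats the loop and reports the best time.

/* Minimal STREAM-style "triad" bandwidth sketch (illustrative, not the
 * official benchmark). Compile with optimization, e.g., gcc -O2. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000L  /* ~10 million doubles per array, enough to spill caches */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];               /* triad: a = b + scalar*c */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes = 3.0 * N * sizeof(double);    /* two reads + one write per element */
    printf("Triad bandwidth: %.2f GB/s (check: a[0] = %g)\n",
           bytes / secs / 1e9, a[0]);           /* print a[0] so the loop isn't elided */

    free(a); free(b); free(c);
    return 0;
}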
There are currently 134 HPC systems that have run at least some of the HPC Challenge benchmarks; the results are listed on the HPCC website at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. As one might suspect, the more traditional cluster systems don't fare as well on some of the tests, especially the ones that stress inter-processor communication. Here the proprietary system interconnects of the high-end IBM and Cray machines show much better performance than their cluster counterparts. For the past two years at the Supercomputing Conference & Expo, the HPCC competition has recognized the top three systems in each benchmark category. During its short history, top honors have gone to IBM Blue Gene and Cray XT3 systems, in that order.
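The communication gap those interconnects close is easiest to see in a ping-pong test between two processors. Here's a minimal MPI sketch of that pattern; it's an illustration of what HPCC's latency measurements stress, not the actual HPCC code. Run it with exactly two ranks, e.g., mpirun -np 2 ./pingpong.

/* Minimal MPI ping-pong latency sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "Run with exactly 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    const int reps = 1000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {        /* send, then wait for the echo */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {                /* echo each message back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("One-way latency: %.2f microseconds\n",
               (t1 - t0) / reps / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}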
Another useful piece of information would be a performance-per-watt metric. If the Top500 organizers required that system power usage be specified with each submission, it would be a simple exercise to calculate Linpack performance per watt for a given machine. The HPCC folks could do the same. The Green500 website, maintained by Dr. Wu-chun Feng and Dr. Kirk W. Cameron at Virginia Tech, is attempting to fill that gap by encouraging HPC installations to provide this type of information. So far they have ranked eight machines. At 112.24 megaflops/watt, IBM Blue Gene/L currently holds the top spot as the most energy-efficient system (for Linpack). To see the whole list, visit http://www.green500.org/Lists.html.
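The arithmetic is as simple as it sounds: divide Linpack Rmax by total power draw. As a sanity check on the figure above, if we assume Blue Gene/L's widely reported Rmax of roughly 280.6 teraflops, the quoted 112.24 megaflops/watt implies a power draw of about 2.5 megawatts. Both numbers in this sketch are my assumptions for illustration, not Green500 submission data.

/* Megaflops per watt from Linpack Rmax and total system power.
 * Both figures below are assumed values for illustration. */
#include <stdio.h>

int main(void) {
    double rmax_tflops = 280.6;  /* assumed Blue Gene/L Linpack Rmax */
    double power_watts = 2.5e6;  /* assumed power draw, ~2.5 MW */
    double mflops_per_watt = rmax_tflops * 1e6 / power_watts;
    printf("%.2f megaflops/watt\n", mflops_per_watt);  /* prints 112.24 */
    return 0;
}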
As petaflop systems start hitting the streets over the next few years, the power issue will loom even larger. IBM claims its new Blue Gene/P architecture will achieve 350 megaflops/watt, an order of magnitude better than traditional cluster systems. If we go by the information provided by Sun Microsystems, their new 500-teraflop "Ranger" Constellation system, to be installed at the Texas Advanced Computing Center later this year, will achieve a very respectable 210 megaflops/watt. According to the Cray XT4 datasheet, that system achieves between 40 and 70 megaflops/watt, depending on the configuration. (I'm assuming the information applies only to dual-core Opteron configurations.)
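Running those efficiency figures in reverse shows why power looms so large at petaflop scale. The sketch below backs out the implied total power draw from the numbers quoted above, plus a hypothetical one-petaflop machine at the mid-range XT4 figure for comparison; treat all of the inputs as approximate.

/* Implied power draw: watts = megaflops / (megaflops per watt).
 * Input figures are the approximate ones quoted in the text. */
#include <stdio.h>

static double megawatts(double tflops, double mflops_per_watt) {
    return tflops * 1e6 / mflops_per_watt / 1e6;  /* TF -> MF -> W -> MW */
}

int main(void) {
    printf("Ranger, 500 TF at 210 MF/W: %.1f MW\n", megawatts(500.0, 210.0));
    printf("1 PF at 350 MF/W (Blue Gene/P claim): %.1f MW\n", megawatts(1000.0, 350.0));
    printf("1 PF at 55 MF/W (mid-range XT4 figure): %.1f MW\n", megawatts(1000.0, 55.0));
    return 0;
}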
Maybe the most important information missing from the Top500 list is the context of those systems within the larger HPC community. Specifically, how much high performance computing is taking place in the Top500 versus all the other HPC systems out there -- what I'll call the "Sub500." Over the past year, the aggregate capacity of the 500 fastest machines almost doubled, going from 2.79 petaflops to 4.92 petaflops. So how much HPC capacity is in the Sub500? And maybe more importantly, did the Sub500 capacity double over the past year as well?
The answer to the last question would tell us if HPC use is getting broader or just deeper. If the former is true, that is, if Sub500 users at least doubled their HPC capacity last year, then true democratization is occurring. But if it's a matter of the rich getting richer, that would suggest that high-end HPC is still in the driver's seat. The more complex answer is that both trends are occurring in tandem, but at any given time one is dominant. But which one?
There is a sense that the "center of mass" for high performance computing is moving downward. According to Chris Willard, senior research consultant with Tabor Research, "[C]apacity growth at the low end of the market is driven by growth in the number and sophistication of users. There is a lot of room for growth here both as more companies come on board, and as recent entries move from proof of concept to production computing. In contrast, the high-end users are pretty much a fixed market -- the world is willing to spend roughly $1 billion a year on top-of-the-line supercomputers and that has not changed over the last two or three decades."
There's little doubt that overall HPC capacity is growing. Over the past few years, high performance and technical computing revenues have increased at a rate exceeding 20 percent (while price/performance continues to improve). And if you can believe IDC, this growth is essentially taking place at the low end of the market, driven by demand for small- and medium-sized cluster systems. But the standard method of data collection for this kind of analysis may favor the low end of the market. For example, some vendors only report compute node sales, not cluster or system sales. And there's no way of telling how nodes are configured after purchase. They may be used as standalone servers or be incorporated into larger systems. To the observer, they all look like low-end systems.
Even assuming the market growth is almost exclusively occurring in the Sub500, I'm not convinced that gains in performance capacity are following the same pattern. Unfortunately, a detailed breakdown of the numbers is hard to come by. As noted above, even simple data collection methodologies have their limitations. And maintaining a list of all HPC systems and computer nodes shipped over the past several years, calculating the capacity of each one, and then determining which machines are in use and which are retired, would be almost impossible. So I'm left wondering.
If the proponents of massive-scale computing are correct, big systems will inherit the IT landscape. In this scenario, computational power will consolidate into larger, fewer machines and most computing will be accessed as a service via a utility model (a la Sun Microsystems' Network.com). Some have even suggested that a handful of computers may be all that's required for the entire world's computing needs. If that's our future, then at some point the Top500 list will look pretty sparse.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - July 12, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.