November 21, 2011
Although the top 10 supercomputers in the world remain unchanged from last June, there are signs that supercomputers overall are getting more energy efficient. The top 10 systems on the new Green500 list average 1530.4 MFLOPS/watt while running Linpack; the top 10 from last June averaged just 1087.0 MFLOPS/watt. That roughly 41 percent increase in performance per watt is somewhat misleading, inasmuch as the top 10 on the list are not representative of average supers.
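The year-over-year improvement follows directly from the two averages cited above (a quick illustrative calculation, not part of the Green500 methodology):

```python
# Average efficiency (MFLOPS/watt) of the Green500 top 10,
# June 2011 vs. November 2011, as cited in the article.
june_avg = 1087.0
november_avg = 1530.4

# Percent improvement in performance per watt
improvement = (november_avg - june_avg) / june_avg * 100
print(f"{improvement:.1f}%")  # prints "40.8%"
```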
Here is how the current top green systems stack up as of November:
1. IBM Rochester, Blue Gene/Q (2026.48 MFLOPS/W)
2. IBM Thomas J. Watson Research Center, Blue Gene/Q (2026.48 MFLOPS/W)
3. IBM Rochester, Blue Gene/Q (1996.09 MFLOPS/W)
4. DOE/NNSA/LLNL, Blue Gene/Q (1988.56 MFLOPS/W)
5. IBM Thomas J. Watson Research Center, NNSA/SC Blue Gene/Q Prototype (1689.86 MFLOPS/W)
6. Nagasaki University, DEGIMA Cluster (1378.32 MFLOPS/W)
7. Barcelona Supercomputing Center, Bullx B505 (1266.26 MFLOPS/W)
8. TGCC/GENCI, Curie Hybrid Nodes, Bullx B505 (1010.11 MFLOPS/W)
9. Chinese Academy of Sciences, Mole-8.5 Cluster (963.70 MFLOPS/W)
10. Tokyo Institute of Technology, HP ProLiant SL390s G7 (958.35 MFLOPS/W)
As you can see, the top five most energy efficient supers are all Blue Gene/Q systems -- some housed at IBM facilities, the others at early deployment sites at DOE labs. Blue Gene/Q was officially launched by IBM during SC11, and large deployments are on tap for Argonne National Lab (Mira, 10 petaflops) and Lawrence Livermore National Lab (Sequoia, 20 petaflops) next year.
The next five systems are all accelerated with GPUs -- NVIDIA parts in four of them, with the remaining system using ATI Radeon graphics processors. All the supercomputers accelerated by IBM's now defunct HPC Cell processor (PowerXCell 8i) are now much further down the list.
It's notable that the BG/Q systems are about twice as efficient as the GPU-accelerated machines, such as the number 10 TSUBAME system at Tokyo Tech. That's a significant data point, given that GPU supercomputing is being promoted by NVIDIA and other GPU computing enthusiasts as an energy-efficient alternative to CPU-only systems.
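The "twice as efficient" figure can be checked against the list above (illustrative arithmetic only, using the published MFLOPS/watt numbers):

```python
# Efficiency figures (MFLOPS/watt) from the November 2011 Green500 top 10
bgq_top = 2026.48   # no. 1: IBM Rochester, Blue Gene/Q
tsubame = 958.35    # no. 10: Tokyo Tech, HP ProLiant SL390s G7 (TSUBAME)

ratio = bgq_top / tsubame
print(f"{ratio:.2f}x")  # prints "2.11x"
```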
Of course, Blue Gene/Q relies on a custom ASIC and interconnect, while the GPUs in these machines are based on commodity graphics processor designs and are tied together by standard InfiniBand. There are no x86-only systems that can compete with GPUs on a FLOPS/watt basis right now, but the Blue Gene/Q design certainly demonstrates what is possible with a purpose-built HPC processor and custom system network.
The other interesting top ten factoid is that all five Blue Gene/Q systems are housed in the US, while the five GPU-powered machines are deployed outside of it. That's mostly a coincidence, but it does point to the slow start of high-end GPU supercomputing in the United States, and the US-centric nature of the early BG/Q deployments. No doubt, these two architectures will show a more international mix in the months and years ahead.