Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

December 9, 2013

Air-Cooled Supercomputer Is Among World’s Greenest

Tiffany Trader
[Image: Wilkes GPU cluster in the 2013 HPCS data centre]

Unlike this November's TOP500 list, which had very little churn in the top 10 (only one new system, Piz Daint, squeezed into the elite club), the most recent Green500 list's upper echelon was filled with nothing but newcomers. While most are liquid-cooled, second place went to an energy-efficient air-cooled supercomputer from the University of Cambridge, named Wilkes.

Like the TOP500 list, the Green500 list provides the computing community and other HPC stakeholders with an important point of reference, a yardstick by which to assess and compare systems. Where the TOP500 is concerned with how a system performs on the LINPACK benchmark, the Green500 takes those same systems and re-ranks them by energy efficiency. Instead of raw performance (FLOPS), the Green500 reflects performance-per-watt (FLOPS per watt).
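The ranking metric itself is simple arithmetic: sustained LINPACK performance divided by power draw. A minimal sketch (the 66.7 kW power figure below is a back-of-envelope assumption chosen for illustration, not a number reported in the article):

```python
def gflops_per_watt(rmax_teraflops: float, power_kw: float) -> float:
    """Green500 metric: sustained LINPACK performance per watt.

    Converts TFLOPS to GFLOPS and kW to W before dividing.
    """
    return (rmax_teraflops * 1_000) / (power_kw * 1_000)

# Illustrative only: a 240-TFLOPS system drawing roughly 66.7 kW
# works out to about 3.6 GFLOPS per watt.
print(round(gflops_per_watt(240, 66.7), 2))  # -> 3.6
```

The same division is why a modest-sized system can top the Green500 while sitting far down the TOP500: the metric rewards efficiency, not scale.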

As of November 2013, the greenest machine of all was Tokyo Institute of Technology's TSUBAME-KFC. Like most Green500 champs, TSUBAME-KFC employs some manner of liquid cooling, in this case an oil-immersion technique. Wilkes, the second-place finisher, however, has the distinction of being the most energy-efficient air-cooled system in the world, capable of performing at 3.6 gigaflops per watt.

Although bested by Japan’s oil-cooled TSUBAME-KFC supercomputer, 3.6 GFLOPS/watt is a remarkable achievement for an air-cooled system. Note also that the UK cluster has surpassed the efficiency of CINECA’s liquid-cooled Eurora system, the June 2013 Green500 star – which achieved an efficiency of 3.2 GFLOPS/watt.

Deployed in November at the University of Cambridge in partnership with Dell, NVIDIA and Mellanox, Wilkes also has the honor of being the UK's fastest academic cluster as well as the UK's fastest GPU-powered supercomputer.

Wilkes was installed by the Cambridge HPC Service as part of the new “SKA Open Architecture Lab.” On the university’s website, HPC Services points out that the high energy-efficiency can be traced to two primary factors: the very high performance per watt provided by the NVIDIA K20 GPU and the energy efficiency obtained from the Dell T620 server.

The cluster is based on 128 Dell T620 servers and 256 NVIDIA K20 GPUs interconnected by 256 Mellanox Connect-IB cards. The machine is capable of 240 LINPACK teraflops, which netted it 166th position on the November TOP500 list. Wilkes received partial funding from STFC in order to advance computing system development for the SKA, a multinational collaboration to build the world's largest radio telescope. At the center of this effort is a requirement for the world's largest streaming data processor, which will necessitate HPC systems many times more powerful than today's crop. Additional industry-backed sponsorship was provided by Rolls Royce and Mitsubishi Heavy Industries.

The Cambridge HPCS facility has the distinction of having two HPC systems in the top half of the TOP500. Along with the Wilkes cluster at 240 teraflops, Cambridge is also home to an Intel-based CPU cluster, called Darwin, which clocks in at 184 LINPACK teraflops. Both systems are now housed within an ultra-efficient HPC datacenter, which uses a combination of evaporative coolers and back-of-rack water heat exchangers. With a spot PUE of 1.075, the new datacenter is 30 percent more energy efficient than the one it replaces. The upgrades and new purchases have resulted in an energy efficiency increase of 150 percent. In other words, the HPC facility produces 2.5 times the computational output for the same energy usage.
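The two efficiency figures above are easy to misread, so a quick sketch of the arithmetic may help. PUE (power usage effectiveness) is total facility power divided by IT equipment power, and a percentage "increase" in efficiency compounds on top of the baseline. The kilowatt figures below are assumed for illustration only, not reported numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power.
    A PUE of 1.075 means only 7.5% overhead beyond the IT load itself."""
    return total_facility_kw / it_equipment_kw

# Assumed figures: 1075 kW total facility draw for a 1000 kW IT load
# would yield the quoted spot PUE.
print(pue(1075, 1000))  # -> 1.075

# A 150 percent efficiency increase means 1 + 1.5 = 2.5x the
# computational output per unit of energy, as the article states.
print(1 + 150 / 100)  # -> 2.5
```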
