July 3, 2014

Team EPCC Reveals Secret Weapon Behind Winning Linpack Score

Tiffany Trader

When Team EPCC first announced that they had earned "Highest Linpack" at the ISC'14 Student Cluster Competition with a score of 10.14 teraflops, they promised that details on their strategy would follow. In a recent blog post, the four students, all in the HPC masters program, describe how they achieved the highest-ever Linpack score in the Student Cluster Competition, breaking the 10 teraflops mark in the process.

The team explains that the HPCC and Linpack benchmarks are included in the competition every year, but there are also three additional applications that students learn about a few months before the competition takes place. This gives them the opportunity to modify their design, learn how to use the applications and optimize them for their system. There are also two additional “surprise” challenges that are not announced until the competition is underway.

The EPCC team is particularly grateful to their sponsor, Boston Limited, as well as vendor CoolIT Systems for providing them with cutting-edge hardware and technology.

The team describes the setup as follows:

“The cluster used for the competition had 4 nodes, each incorporating 2 Intel Xeon E5 2680 v2 CPUs, 2 NVIDIA K40 GPUs and 64GB DDR-3 Registered ECC Memory and Intel 510 Series SSDs (7 in total). In terms of interconnect we used Mellanox 12-Port 40/56GbE.”

Cooling was mainly handled by the CoolIT Rack DCLC AHx liquid cooling system, mounted directly onto the Intel Xeon E5-2680 v2 CPUs and NVIDIA K40 GPUs. In this setup, heat from both the processors and GPU accelerators transfers into circulating liquid, which carries it to a liquid-to-air heat exchanger mounted at the top of the rack.

The system was designed with the Linpack benchmark as a primary aim, since the “Highest Linpack” award is the only one where results are compared from year to year.

“We chose to incorporate NVIDIA K40 accelerators in our system,” states the team, “as the data we collected on benchmarks showed that they provide very high flops per watt. We decided that since the majority of computation would be taking place in the GPUs, we would eliminate as much overhead as possible, having an equal amount of CPUs and GPUs in the final configuration.”
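The team's reasoning can be sketched as a simple flops-per-watt budget under the competition's 3kW cap. The per-component performance and power figures below are assumed, round illustrative numbers, not the team's measured data:

```python
# Illustrative sketch: weighing a configuration's flops-per-watt against
# the competition's 3 kW power cap. All per-component figures here are
# assumed values for illustration, not EPCC's measurements.

POWER_CAP_W = 3000  # competition power limit

# (sustained GFLOPS per unit, watts per unit under HPL load) -- assumptions
COMPONENTS = {
    "cpu": (200.0, 100.0),   # e.g. one Xeon E5-2680 v2
    "gpu": (1200.0, 180.0),  # e.g. one K40 at its 745 MHz base clock
}

def config_totals(n_cpus, n_gpus, overhead_w=300.0):
    """Total GFLOPS and watts for a configuration, with a fixed
    overhead term (memory, SSDs, interconnect, remaining fans)."""
    gflops = n_cpus * COMPONENTS["cpu"][0] + n_gpus * COMPONENTS["gpu"][0]
    watts = (n_cpus * COMPONENTS["cpu"][1]
             + n_gpus * COMPONENTS["gpu"][1]
             + overhead_w)
    return gflops, watts

# An 8-CPU / 8-GPU layout like the team's four-node cluster
gflops, watts = config_totals(n_cpus=8, n_gpus=8)
print(f"{gflops:.0f} GFLOPS at {watts:.0f} W "
      f"({gflops / watts:.2f} GFLOPS/W), "
      f"within cap: {watts <= POWER_CAP_W}")
```

With GPUs delivering several times the flops per watt of CPUs, the accelerators dominate the budget, which is why the team pared CPU count down to match the GPUs and cut everything else it could.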

The team considers liquid cooling its secret weapon. The CoolIT technology enabled them to remove most of the fans from the system: only four of the original 20 fans were left on each server. The heat exchanger consumed only about 90W in total, and this was reduced further by turning off some of its fans.

The team also devoted many hours to testing to determine the optimum configuration given the cluster design and the competition’s 3kW power limits.

“By keeping detailed documentation of every test performed we were able to quickly adapt to changes in our hardware configuration and drain every flop possible out of every watt the system consumed,” they state.

The team decided to use all 80 of the system’s CPU cores as well as all eight GPUs with their base clock of 745MHz. Because the HPL binary made efficient use of the GPUs, the minimal benefit provided by higher clocks would not have been worth the additional power consumption, according to the students.
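The clock-versus-power tradeoff the students describe can be modeled crudely: once HPL keeps the GPUs saturated, raising the core clock adds power faster than it adds sustained flops. The scaling factors below are assumptions for illustration, not K40 measurements (875MHz is one of the K40's selectable application clocks):

```python
# Crude model of the clock-vs-power tradeoff: sustained flops scale
# sub-linearly with clock (the memory-bound fraction doesn't speed up),
# while power scales super-linearly (voltage/frequency cost).
# All scaling factors are assumptions for illustration.

BASE_CLOCK_MHZ = 745   # K40 base clock, as used by the team
BOOST_CLOCK_MHZ = 875  # a higher K40 application-clock setting

def projected(clock_mhz, base_gflops=1200.0, base_watts=180.0):
    """Project GFLOPS and watts at a given clock relative to base."""
    ratio = clock_mhz / BASE_CLOCK_MHZ
    gflops = base_gflops * (0.6 * ratio + 0.4)  # 60% clock-sensitive work
    watts = base_watts * ratio ** 1.5           # super-linear power growth
    return gflops, watts

for clock in (BASE_CLOCK_MHZ, BOOST_CLOCK_MHZ):
    g, w = projected(clock)
    print(f"{clock} MHz: {g:.0f} GFLOPS at {w:.0f} W "
          f"-> {g / w:.2f} GFLOPS/W")
```

Under these assumed factors the base clock delivers more flops per watt than the boosted one, which matches the team's conclusion that the extra clock speed was not worth the power under a hard 3kW cap.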

Check out the team’s blog for a full account.
