Considerations in HPC Cluster Network Selection

By Alex Woodie

July 17, 2013

There’s a lot going on in the networks of HPC clusters, and selecting the right network fabric, equipment, and topology is important to ensuring good performance for given applications. A “one size fits all” approach rarely works, and architects will do well to tailor the network to the needs of the application. 

In a recent report, titled “The Role of High-Bandwidth, Low-Latency Interconnects in High Performance Clusters,” Sebastian Kalcher, lead HPC architect at Adtech Global and a former HPC and high-speed interconnect engineer at CERN, discusses the important role of the network in an HPC cluster and the various design considerations that should be taken into account.

Today’s HPC applications are very dependent on high-bandwidth, low-latency interconnects to move data among the various nodes of a cluster, Kalcher says. As clusters get bigger, efficiency becomes a bigger concern, and low-overhead protocols that help avoid wasting compute power become even more important.

When selecting the main fabric to be used for an HPC cluster network, the choice often comes down to two options: QDR/FDR InfiniBand or 1G/10G Ethernet. Users should look at the communication patterns of the HPC application at hand, including the size of messages being sent and the level of latency that is acceptable, to make the best decision, Kalcher says.

For example, are the parallel processes communicating large chunks of data with their peers? And if so, are they communicating with all others or maybe only with their neighbors? “This can have a direct effect on a suitable network topology (and with that, on the overall cost of the fabric),” Kalcher says in his paper.

On the other hand, some communication patterns are dominated by the exchange of smaller control messages, in which case, latency might be the more important issue. “The size of the actual messages that are exchanged can have an effect on the overall performance,” he writes.
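The trade-off between latency-dominated and bandwidth-dominated traffic can be made concrete with a simple alpha-beta cost model, where transfer time is latency plus message size divided by bandwidth. The latency and bandwidth figures below are illustrative assumptions (roughly microsecond-scale InfiniBand latency versus a heavier Ethernet/TCP stack), not measurements from Kalcher’s paper:

```python
def transfer_time_us(size_bytes, latency_us, bandwidth_gbps):
    """Estimated point-to-point transfer time in microseconds:
    T = latency + size / bandwidth (the classic alpha-beta model)."""
    bytes_per_us = bandwidth_gbps * 1e9 / 8 / 1e6  # Gbps -> bytes per microsecond
    return latency_us + size_bytes / bytes_per_us

# Assumed figures: ~1 us / 54.55 Gbps for FDR InfiniBand,
# ~10 us / 10 Gbps for 10G Ethernet with TCP overhead.
for size in (64, 4096, 1 << 20):
    ib = transfer_time_us(size, 1.0, 54.55)
    eth = transfer_time_us(size, 10.0, 10.0)
    print(f"{size:>8} B   IB: {ib:9.2f} us   10GbE: {eth:9.2f} us")
```

For a 64-byte control message the time is almost entirely latency, so the lower-latency fabric wins by an order of magnitude; for a 1 MB bulk transfer the bandwidth term dominates instead, which is exactly the distinction the communication-pattern analysis is meant to surface.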

InfiniBand is the choice for many general purpose HPC clusters, thanks to its high throughput and low latency. And thanks to the low-overhead 64b/66b encoding scheme used in Fourteen Data Rate (FDR) InfiniBand, very high data rates (up to 54.55 Gbps) can be achieved, while spending fewer CPU cycles on message copying, protocol handling, and checksum calculation than QDR or DDR, which use an 8b/10b encoding scheme.
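The encoding overhead figures follow directly from the line codes named above: 8b/10b carries 8 payload bits in every 10 transmitted bits (20% overhead), while 64b/66b carries 64 in 66 (about 3%). A quick sketch, using the standard 4x raw signaling rates for QDR and FDR:

```python
def effective_gbps(raw_gbps, payload_bits, coded_bits):
    """Usable data rate after line-encoding overhead."""
    return raw_gbps * payload_bits / coded_bits

qdr = effective_gbps(40.0, 8, 10)    # 4x QDR: 40 Gbps raw, 8b/10b encoding
fdr = effective_gbps(56.25, 64, 66)  # 4x FDR: 56.25 Gbps raw, 64b/66b encoding
print(f"QDR effective: {qdr:.2f} Gbps")
print(f"FDR effective: {fdr:.2f} Gbps")
```

The FDR result works out to roughly 54.55 Gbps, matching the figure quoted in the text, versus 32 Gbps effective for QDR.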

InfiniBand also delivers flexibility in the network topology. Most clusters use a fat-tree topology, with 36-port switches arranged in a tree structure as the building block, according to Kalcher’s paper. Depending on whether flexibility or cost is the main goal, the HPC network architect can choose different topologies.

When the edge switches in an HPC cluster have an equal number of InfiniBand links going to the core switches and to the processor nodes, it is considered to have 1:1 bisectional bandwidth. This is the most flexible and fault-tolerant topology, but it is also the most expensive. Depending on the application, a bisectional bandwidth ratio of 1:3 (three times as many links to compute nodes as to core switches) may deliver the required performance, at a lower cost.
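The cost/capacity trade-off can be sketched for the two-level fat tree of 36-port switches described above. This is a simplified sizing model under stated assumptions (a single two-level tree, ports split only between host downlinks and core uplinks), not a full network design:

```python
def fat_tree_hosts(ports=36, hosts_per_uplink=1):
    """Max hosts in a two-level fat tree of fixed-radix switches.

    hosts_per_uplink=1 gives a non-blocking 1:1 tree (half the edge
    ports go up, half go down); hosts_per_uplink=3 gives a 1:3
    oversubscribed tree (three host ports per core uplink).
    """
    uplinks = ports // (hosts_per_uplink + 1)   # edge ports toward core
    downlinks = ports - uplinks                 # edge ports toward hosts
    # Each core switch has `ports` ports, so up to `ports` edge switches
    # can each land one link on every core switch.
    edge_switches = ports
    return edge_switches * downlinks

print(fat_tree_hosts(36, 1))  # non-blocking: 36 edge switches x 18 hosts = 648
print(fat_tree_hosts(36, 3))  # 1:3 ratio:    36 edge switches x 27 hosts = 972
```

The oversubscribed variant supports half again as many hosts from the same switch radix (and needs 9 core switches instead of 18), which is the cost saving Kalcher describes, at the price of reduced bisection bandwidth when many nodes communicate across the core at once.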
