Selecting the Most Effective InfiniBand Topology for Technical Computing

By Nicole Hemsoth

December 5, 2011

SGI® ICE 8400

Selecting the Most Effective InfiniBand Topology

Across a wide range of disciplines, InfiniBand technology now enables clusters that range from a few systems to the largest technical computing clusters in the world. In only a few years, clustering with InfiniBand has come to dominate the top 100 of the Top500 list of supercomputing sites (www.top500.org). As new grand-challenge and other computational problems emerge, ever-larger clusters will be required. Even with routine advances in processor speed and memory capacity, scaling cluster size will likely remain the simplest way to grow computational capacity for the world’s most demanding problems. InfiniBand can be deployed in multiple topologies, however, and choosing the optimum topology can be difficult, with trade-offs in scalability, performance, and cost. SGI has considerable experience in the design and deployment of some of the largest InfiniBand clusters in existence.

While some vendors’ limitations drive them to push one topology choice above others, SGI understands that the best topology is the one that matches the needs of the application. Based on high-performance AMD Opteron™ 6200 Series processors, the SGI® ICE 8400 system is designed for flexible, optimized InfiniBand topology configuration.

InfiniBand Topology Considerations and Trade-offs

SGI ICE supports multiple InfiniBand topology choices, including All-to-All, Fat Tree (CLOS), Standard Hypercube, and SGI Enhanced Hypercube. Choosing the right topology involves understanding the needs of the application as well as comparing key metrics and cost implications.

SGI ICE Topology Choices

InfiniBand fabrics present different advantages and limitations. The SGI ICE system is designed to flexibly support multiple InfiniBand topologies, including:

  • All-to-All. All-to-All topologies are ideal for applications that are highly sensitive to Message Passing Interface (MPI) latency, since they provide minimal latency in terms of hop count. Though All-to-All topologies can provide non-blocking fabrics and high bisection bandwidth, they are restricted to relatively small cluster deployments due to limited switch port counts.
  • Fat Tree. Fat Tree or CLOS topologies are well suited for smaller node-count MPI jobs. Fat Tree topologies can provide non-blocking fabrics and consistent hop counts, resulting in predictable latency for MPI jobs. At the same time, Fat Tree topologies do not scale linearly with cluster size: cabling and switching become increasingly difficult and expensive as cluster size grows, with very large core switches required for larger clusters.
  • Standard Hypercube. Standard Hypercube topologies are ideal for large node-count MPI jobs, provide rich bandwidth capabilities, and scale easily from small to extremely large clusters. Hypercubes add orthogonal dimensions of interconnect as they grow and are easily optimized for both local and global communication within the cluster. The Standard Hypercube topology provides the lightest-weight fabric at the lowest cost, with a single cable typically used for each dimensional link.
  • SGI Enhanced Hypercube. Adding to the benefits of Standard Hypercube topologies, SGI Enhanced Hypercube topologies make use of additional available switch ports by adding redundant links at the lower dimensions of the hypercube to improve the overall bandwidth of the interconnect (see the scaling comparison sketched after this list).
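To make the scaling trade-offs above more concrete, the short Python sketch below compares worst-case inter-switch hop counts and cabling growth for the basic topologies using standard textbook approximations. The 36-port switch radix and the level thresholds are illustrative assumptions, not SGI ICE specifications or measurements.

```python
"""Illustrative scaling comparison for the topologies described above.

Formulas are textbook approximations, not SGI measurements:
  * All-to-All: every switch links directly to every other switch, so the
    diameter stays at 1 hop but inter-switch cabling grows as n*(n-1)/2.
  * Fat Tree (CLOS): a 2-level leaf/spine fabric gives a constant 2-hop
    worst case while ports last; larger clusters need a 3-level tree.
  * Hypercube: 2**d switches in d dimensions give a diameter of d hops,
    with only d inter-switch cables per switch regardless of scale.
"""
import math

def all_to_all(n_switches: int) -> dict:
    return {
        "diameter_hops": 1,
        "inter_switch_cables": n_switches * (n_switches - 1) // 2,
    }

def fat_tree(n_switches: int, radix: int = 36) -> dict:
    # 2-level CLOS while the spine radix allows it, otherwise 3-level.
    levels = 2 if n_switches <= radix else 3
    return {"diameter_hops": 2 * (levels - 1), "levels": levels}

def hypercube(n_switches: int) -> dict:
    d = math.ceil(math.log2(max(n_switches, 2)))
    return {"diameter_hops": d, "cables_per_switch": d}

if __name__ == "__main__":
    for n in (8, 64, 512):
        print(n, all_to_all(n), fat_tree(n), hypercube(n))
```

Running the sketch shows the basic pattern the article describes: All-to-All cabling explodes with scale, Fat Tree holds latency constant until it needs another switching tier, and the hypercube grows hop count only logarithmically while adding just one cable per switch per new dimension.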

SGI ICE 8400: Designed for InfiniBand

The SGI ICE platform is fundamentally architected to provide cost-effective high-performance InfiniBand infrastructure. The SGI ICE 8400 platform in particular is capable of achieving industry-leading scalability without sacrificing application performance efficiency. The platform offers a variety of interconnect options that let organizations scale their applications across hundreds or thousands of processor cores.

The SGI ICE 8400 system can accommodate up to 16 compute blades within each Individual Rack Unit (IRU). The IRU is a 10 rack unit (10U) chassis that provides power, cooling, system control, and network fabric for up to 16 blades via a backplane. Up to four IRUs are supported in each custom-designed 42U rack, with a choice of either air cooling or water cooling for all configurations. Each rack supports the following (the sketch after this list shows how these maximums compose):

  • A maximum of four IRUs
  • Up to 2,048 AMD Opteron™ 6200 Series processor cores
  • A maximum of 12.2TB of memory (64 x 192GB)
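These per-rack maximums can be checked with simple arithmetic. The sketch below assumes two 16-core Opteron 6200 sockets and 192GB of memory per blade; the per-blade breakdown is an assumption for illustration, chosen to be consistent with the rack-level totals quoted above.

```python
# Illustrative check of the per-rack maximums quoted above.
# Assumptions (not stated explicitly in the article): each blade holds
# two 16-core AMD Opteron 6200 Series sockets and up to 192GB of memory.
BLADES_PER_IRU = 16
IRUS_PER_RACK = 4
SOCKETS_PER_BLADE = 2        # assumed
CORES_PER_SOCKET = 16        # Opteron 6200 Series maximum
MEMORY_PER_BLADE_GB = 192    # implied by the "64 x 192GB" figure

blades_per_rack = BLADES_PER_IRU * IRUS_PER_RACK            # 64 blades
cores_per_rack = blades_per_rack * SOCKETS_PER_BLADE * CORES_PER_SOCKET
memory_per_rack_tb = blades_per_rack * MEMORY_PER_BLADE_GB / 1000

print(f"{blades_per_rack} blades, {cores_per_rack} cores, "
      f"~{memory_per_rack_tb:.1f}TB per rack")
# -> 64 blades, 2048 cores, ~12.3TB (roughly the 12.2TB maximum quoted)
```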

Conclusion

An effective InfiniBand topology requires system architecture designed with scalability in mind. The SGI ICE system was purposely designed for InfiniBand networking, and together with the high core density of AMD Opteron 6200 Series processors, the platform is capable of achieving industry-leading density and scalability for a broad range of technical computing applications. As the world’s only 16-core x86 processor, the AMD Opteron 6200 Series delivers unprecedented scalability for large HPC deployments. With a choice of supported InfiniBand topologies, the SGI ICE system is ideal for deploying InfiniBand clusters ranging from a single 16-node IRU to hundreds of racks and many thousands of nodes.

Selecting an appropriate InfiniBand topology requires careful consideration of applications, algorithms, and data sets, along with likely needs for future scalability. In the absence of benchmark data, some basic knowledge of application characteristics may be enough to guide topology choices. Extensive testing by SGI has shown that applications are generally less sensitive to topology than kernel benchmarks, but that differences in performance become more pronounced as clusters grow in size. When global interconnect bandwidth is important, Enhanced Hypercube dual-rail is the raw performance leader. For smaller single-rail deployments, Fat Tree is often the most economical choice. As clusters grow, hypercube topologies gain scalability, performance, and cost advantages, avoiding the external switching and cabling required by Fat Tree and All-to-All topologies. Having deployed some of the world’s largest open-systems InfiniBand networks and clusters, SGI has the experience and expertise to help organizations choose the right equipment and networking topology to meet their most challenging computational problems.
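The rules of thumb in this conclusion can be summarized as a rough decision aid. The sketch below encodes only the guidance stated above; the 128-node cutoff is a placeholder assumption for illustration, not an SGI sizing recommendation, and benchmark data should drive any real decision.

```python
def suggest_topology(nodes: int,
                     global_bandwidth_critical: bool = False,
                     small_cluster_cutoff: int = 128) -> str:
    """Illustrative heuristic encoding the article's rules of thumb.

    The 128-node cutoff is a placeholder assumption, not an SGI
    recommendation; benchmarking should drive the real choice.
    """
    if global_bandwidth_critical:
        # Enhanced Hypercube dual-rail is cited as the raw bandwidth leader.
        return "Enhanced Hypercube (dual-rail)"
    if nodes <= small_cluster_cutoff:
        # Fat Tree is often the most economical single-rail choice
        # at smaller scales.
        return "Fat Tree (CLOS)"
    # At larger scales, hypercubes avoid external switching and cabling costs.
    return "Hypercube / Enhanced Hypercube"

# Example: a 1,024-node cluster without extreme global bandwidth needs
print(suggest_topology(1024))   # -> "Hypercube / Enhanced Hypercube"
```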

For more information go to: www.sgi.com/go/amd
