The 3D Torus Architecture and the Eurotech Approach

By Nicole Hemsoth

June 20, 2011

The ability of supercomputers to run jobs progressively faster serves the computational needs of both scientific research and a growing number of industry sectors.

Processor power is central to the performance of an HPC system, but it is not the only factor. One of the key aspects of a parallel computer is the communication network that interconnects its computing nodes. The network is what guarantees fast interaction between CPUs and allows the processors to cooperate, which is essential to solving complex computational problems quickly and efficiently.

Together with speed, HPC systems are increasingly expected to be highly available. Downtime can hurt a high-performance machine quite badly. A reliable average machine with high uptime is better than a faster one with a low MTBF (mean time between failures): ultimately, the former will process more jobs in a week than the latter.

An additional challenge with large systems is scalability: the ability to add nodes to a cluster while affecting performance and reliability as little as possible. Petascale, and eventually exascale, installations require hundreds of thousands of cores to work together efficiently.

It is also paramount for future machines to consume less energy, as the cost and availability of electrical power is becoming the most demanding challenge on the road to exascale computing.

3D torus connectivity

In a computer cluster, the way the nodes are connected together can go a long way toward solving the issues mentioned above.

Despite being available for quite a while, the torus architecture now has the potential to move from niche application to mainstream. This is because, like never before, we face severe challenges posed by a rising number of nodes. Before being a problem of performance, it is one of topology and scalability: the more a system grows, the more switched fat-tree topologies show their limits in cost, maintainability, power consumption, reliability and, above all, scalability.

Connecting nodes in a 3D torus configuration means that each node in a cluster is connected to the adjacent ones via short cabling. The signal is routed directly from one node to the next, with no need for switches. “3D” means that communication takes place in six different “directions”: X+, X-, Y+, Y-, Z+ and Z-. In practical terms, each node is connected to six other nodes, so the graph of connections resembles a three-dimensional matrix.
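
To make the wraparound concrete, here is a minimal C sketch (illustrative only, with made-up dimensions, not Eurotech firmware) that computes the six neighbors of a node; the modulo arithmetic is what closes each axis into a ring:

```c
#include <stdio.h>

/* Illustrative sketch: the six neighbors of node (x, y, z) in an
 * X x Y x Z 3D torus. Wraparound (modulo) arithmetic closes each
 * axis into a ring, so every node has exactly six neighbors:
 * X+, X-, Y+, Y-, Z+, Z-. Dimensions here are arbitrary examples. */
static int wrap(int c, int size) { return (c + size) % size; }

int main(void) {
    const int X = 4, Y = 4, Z = 4;   /* example torus dimensions */
    int x = 0, y = 2, z = 3;         /* node of interest */

    int neighbors[6][3] = {
        { wrap(x + 1, X), y, z },    /* X+ */
        { wrap(x - 1, X), y, z },    /* X- (wraps to x = 3) */
        { x, wrap(y + 1, Y), z },    /* Y+ */
        { x, wrap(y - 1, Y), z },    /* Y- */
        { x, y, wrap(z + 1, Z) },    /* Z+ (wraps to z = 0) */
        { x, y, wrap(z - 1, Z) },    /* Z- */
    };

    for (int i = 0; i < 6; i++)
        printf("neighbor %d: (%d, %d, %d)\n",
               i, neighbors[i][0], neighbors[i][1], neighbors[i][2]);
    return 0;
}
```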

Such a configuration allows nodes to be added to a system without degrading performance. Each new node joins the grid as an extension of it, linked with no extensive cabling or switching. While linear scaling with little or no performance loss strictly holds only for problems that rely heavily on nearest-neighbor communication, it is also true that, by avoiding switches, hundreds of nodes can be added without clogged links or busy fat-tree switch leaves. Moreover, adding a node to a large system involves far less work and potential trouble on a 3D torus network than on a switched fat-tree one.

The pairwise connectivity between nearest-neighbor nodes in a 3D torus helps reduce latency and the bottlenecks typical of switched networks. Because the connections between nodes are short and direct, link latency is very low. This benefits machine performance, especially for local-pattern problems, which can be mapped effectively onto the matrix mentioned above. The switchless nature of the 3D torus facilitates fast communication between nodes.
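
The locality argument can be made concrete with another small sketch (again illustrative, with hypothetical dimensions): on each axis the shortest route can wrap either way around the ring, so the hop count between two nodes is the sum of min(|d|, size - |d|) over the three axes, and nearest neighbors are always exactly one hop apart:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch: minimal hop count between two nodes on a 3D torus.
 * On each axis the shortest route may go "forward" or "backward" around
 * the ring, so the per-axis distance is min(|d|, size - |d|). This is
 * why local-pattern (stencil-like) traffic stays at one hop per link. */
static int ring_dist(int a, int b, int size) {
    int d = abs(a - b);
    return d < size - d ? d : size - d;
}

int torus_hops(const int a[3], const int b[3], const int dims[3]) {
    return ring_dist(a[0], b[0], dims[0])
         + ring_dist(a[1], b[1], dims[1])
         + ring_dist(a[2], b[2], dims[2]);
}

int main(void) {
    int dims[3] = { 8, 8, 8 };
    int a[3] = { 0, 0, 0 }, b[3] = { 7, 0, 0 };
    /* (0,0,0) -> (7,0,0) is a single hop thanks to the wraparound link. */
    printf("hops = %d\n", torus_hops(a, b, dims));
    return 0;
}
```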

Switches are also potential points of failure, so decreasing their number should improve the operational robustness of a system. In other words, the 3D torus makes a system more agile and quicker to react to failures: if a connection or a node fails, the affected traffic can be rerouted in many different directions. The very nature of the 3D torus, in which each node connects to its nearest neighbors to form a three-dimensional lattice, guarantees multiple paths from any node to any other.

By eliminating costly and power-hungry external spine and leaf switches, along with their accompanying rack chassis and cooling systems, torus architectures also help reduce installation costs and energy consumption.

Applications

When it comes to applications that can fully benefit from the 3D torus configuration, we touch on one of the caveats of this intelligent connection scheme.

The 3D torus delivers its maximum performance on a subset of problems that is rather large but specific: local-pattern problems, which typically model systems whose behavior depends on adjacent systems. Typical examples are computer simulations of Lattice QCD and fluid dynamics. More generally, many Monte Carlo simulations and embarrassingly parallel problems can exploit the full performance advantage of the 3D torus architecture, making the range of possible applications quite vast, especially in scientific research.
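
In practice, this nearest-neighbor pattern is exactly what a periodic Cartesian communicator expresses in MPI. The following sketch (a generic MPI example, not Eurotech's software stack) builds a periodic 3D grid of ranks and performs a token halo exchange with the six neighbors, the communication skeleton of Lattice QCD and stencil-based fluid dynamics codes:

```c
#include <mpi.h>

/* Illustrative sketch: a periodic 3D Cartesian communicator is the
 * software-level analogue of a 3D torus. Each rank talks only to its
 * six nearest neighbors, one halo exchange per axis. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int nranks, dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1};
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    MPI_Dims_create(nranks, 3, dims);   /* factor ranks into a 3D grid */

    MPI_Comm torus;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &torus);

    double send = 1.0, recv[2];         /* stand-in for a halo slab */
    for (int axis = 0; axis < 3; axis++) {
        int minus, plus;
        MPI_Cart_shift(torus, axis, 1, &minus, &plus);
        /* exchange with the "+" neighbor, then with the "-" neighbor */
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, plus, 0,
                     &recv[0], 1, MPI_DOUBLE, minus, 0,
                     torus, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, minus, 1,
                     &recv[1], 1, MPI_DOUBLE, plus, 1,
                     torus, MPI_STATUS_IGNORE);
    }

    MPI_Comm_free(&torus);
    MPI_Finalize();
    return 0;
}
```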

Problems that require all-to-all communication between nodes are less able to exploit the full performance of the 3D torus interconnect. However, independently of the type of application and problem, the 3D torus still offers major advantages in scalability and serviceability, while also increasing system availability and reducing power consumption. For large systems, these advantages can be so significant that finding the perfect match between problem and topology to maximize computational performance may well become secondary.

The Eurotech approach

It is rather interesting to analyze what Eurotech, a leading European computer manufacturer, has done with the 3D torus network of its Aurora supercomputer line.

Eurotech wanted to leverage the benefits of the 3D torus in its high-end products while leaving users the flexibility and freedom to run all the applications they need.

Taking these diverging requirements into account, Eurotech and its scientific partners took an approach called Unified Network Architecture in designing the Aurora data center clusters. Fundamentally, this means that Aurora systems have three different networks operating concurrently on the same machine: two fast independent networks (InfiniBand and 3D torus) and a multi-level synchronization network.

The coexistence of InfiniBand and the 3D torus provides flexibility of use: depending on the application, one or the other network can be employed. The synchronization networks act at different levels, synchronizing the CPUs and thereby reducing or eliminating OS jitter, which makes the system more scalable.

Torus topologies have traditionally been implemented with proprietary, costly application-specific integrated circuit (ASIC) technology. Eurotech chose to drive the torus with FPGAs, injecting more flexibility into the hardware, and to rely on both a GPL and a commercial distribution of the 3D torus software. The 3D torus network is managed by a network processor implemented in the FPGAs, which interfaces with the system hub through two x8 PCI Express Gen 3 connections, for a total internal bandwidth of 120 Gb/s.
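
As a rough sanity check on that figure (assuming standard PCI Express Gen 3 signaling, which is an inference, not a Eurotech specification): a single x8 Gen 3 link carries 8 GT/s per lane x 8 lanes x 128b/130b encoding ≈ 63 Gb/s of payload per direction, so two x8 links provide roughly 126 Gb/s, consistent with the quoted 120 Gb/s aggregate.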

Each link in the torus architecture is physically implemented by two lines (main and redundant) that can be selected in software to configure the machine partitioning (the full 3D torus or one of the many available 3D sub-tori). In this way, the redundant channels allow the system to be repartitioned on the fly. Partitioning the system into sub-domains makes it possible to create system partitions that communicate over independent tori, effectively creating different execution domains. In addition, each sub-domain can benefit from a dedicated synchronization network.

As an example of partitioning, if a backplane with 16 boards is considered, the available topologies for partitioning the machine into smaller sub-machines with periodic boundaries (sub-tori) are:

– Half Unit: 2 x [1:2*NC] x [1:2*NR]
– Unit: 4 x [1:2*NC] x [1:2*NR]
– Double Unit: 8 x [1:2*NC] x [1:2*NR]; Rack: 8 x 2*NC x 2; Machine: 8 x 2*NC x 2*NR
– Chassis: 16 x [1:NC] x [1, 2, 2*NR]; Rack: 16 x NC x 2; Machine: 16 x NC x 2*NR

where:

– NC: number of chassis in a rack (8)
– NR: number of racks in a machine (from 1 to many hundreds)

Partitioning, FPGAs, redundant channels and synchronization networks are some of the distinctive characteristics Eurotech wanted in the torus architecture, to create Intel-based clusters with the flavor of a special machine.
