Researchers at Meta, MIT, and other institutions connected a dozen servers, each with an Nvidia GPU, through optical switches and a robotic patch panel, devising a new interconnect that could be used for machine learning. The fabric, called “TopoOpt,” can create network topologies on the fly depending on computing needs. The technology arrives as high-performance computers are strained by wider adoption of AI applications like ChatGPT, which is testing the limits of Microsoft’s AI supercomputing infrastructure.
A paper on the technology was presented at the USENIX Symposium on Networked Systems Design and Implementation being held this week.
TopoOpt uses algorithms to find the fastest parallel training configuration based on information such as processing requirements, available computing resources, data routing techniques and network topology. The researchers also improved on AllReduce, the collective communication operation (implemented for Nvidia GPUs in libraries such as NCCL) that synchronizes gradients between GPUs during training.
“TopoOpt creates dedicated partitions for each training job using reconfigurable optical switches and patch panels, and jointly optimizes the topology and parallelization strategy within each partition,” the researchers wrote.
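The joint optimization the researchers describe can be sketched at toy scale. The snippet below is a minimal illustration, not the paper's actual algorithm: it assumes each candidate parallelization strategy is summarized by a pairwise traffic matrix, greedily wires up the heaviest-communicating server pairs under a per-server link budget, and keeps the strategy/topology pair that puts the most traffic on direct links. All function names and the scoring rule are hypothetical.

```python
def build_topology(traffic, n, degree):
    """Greedily connect the heaviest-communicating server pairs first,
    subject to a per-server link (degree) budget."""
    deg = [0] * n
    links = set()
    for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if deg[a] < degree and deg[b] < degree:
            links.add((a, b))
            deg[a] += 1
            deg[b] += 1
    return links

def score(links, traffic):
    """Fraction of traffic volume that travels over a direct link."""
    total = sum(traffic.values())
    direct = sum(v for pair, v in traffic.items() if pair in links)
    return direct / total

def co_optimize(strategies, n, degree):
    """Pick the (strategy, topology) pair that keeps the most traffic
    on direct optical links -- a toy stand-in for joint optimization."""
    best = None
    for name, traffic in strategies.items():
        links = build_topology(traffic, n, degree)
        s = score(links, traffic)
        if best is None or s > best[2]:
            best = (name, links, s)
    return best
```

With a link budget of two per server, for instance, a ring-shaped traffic pattern can be matched perfectly by a ring topology, while all-to-all traffic cannot, so the sketch would select the ring strategy.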
The researchers tested TopoOpt on Meta infrastructure, using a dozen Asus ESC4000A-E10 servers, each equipped with one A100 GPU and a 100 Gbps HPE-branded Mellanox ConnectX-5 NIC. The NICs carried optical transceivers with breakout fibers.
“TopoOpt is the first system that co-optimizes topology and parallelization strategy for ML workloads and is currently being evaluated for deployment at Meta,” the researchers said.
The setup also uses a patch panel from Telescent that reconfigures the network with “a robotic arm that grabs a fiber on the transmit side and connects it to a fiber on the receive side,” the paper said. The software-controlled arm moves up and down to link a transmit fiber to any receive fiber in the system, providing the flexibility and elasticity required to reconfigure a network quickly. Conventional patch panels are already widely used commercially, but robotic, software-controlled versions are only now being proposed for datacenter networks.
Google recently presented a paper detailing how it used an AI supercomputer with optical circuit switches to improve training speeds on its TPU v4 chips while keeping power consumption down. The optical circuit switching (OCS) in Google’s setup has no robotic arm; instead it steers light between input and output fibers with mirrors. Google’s test bed was also far larger, with an at-scale deployment across 4,096 TPUs.
The researchers opted for the patch panel because they found Google-style optical circuit switches to be “five times more expensive” and to support fewer ports. They noted, however, that OCS technology like Google’s is built for at-scale deployments: “The main advantage of OCSs is that their reconfiguration latency is four orders of magnitude faster than patch panels,” the researchers wrote.
TopoOpt pre-provisions compute and network resources, so a job can start as soon as its servers come online. “We already know the sequence of job arrivals and the number of servers required by each job,” the researchers wrote, adding that “this design allows each server to participate in two independent topologies.”
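That pre-provisioning idea can be illustrated with a toy scheduler. This sketch is an assumption-laden simplification, not the paper's design: it models each server as having two interface "slots" (echoing the two-independent-topologies remark), walks the known job sequence in order, and places each job on the first servers with a free slot. The job names, the slot model, and the first-fit placement are all hypothetical.

```python
def provision(jobs, num_servers, slots_per_server=2):
    """Pre-assign servers to a known sequence of (job, servers_needed)
    pairs. Each server has two 'slots', so it can join two independent
    job topologies at once (a simplifying assumption)."""
    free = {s: slots_per_server for s in range(num_servers)}
    placement = {}
    for job, need in jobs:  # arrival order is known in advance
        chosen = [s for s, slots in free.items() if slots > 0][:need]
        if len(chosen) < need:
            placement[job] = None  # insufficient capacity; job must wait
            continue
        for s in chosen:
            free[s] -= 1  # consume one slot on each chosen server
        placement[job] = chosen
    return placement
```

Run on four servers with two three-server jobs, the first three servers end up carrying both jobs at once, each sitting in two independent topologies; a third job that needs more free slots than remain is left unplaced.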
The researchers concluded that TopoOpt delivered 3.4 times faster training iteration time than a conventional “fat-tree” network, the widely used design in which multiple layers of static switches fan out from a core networking backbone to front-end servers.
Reconfigurable optical networking inside a datacenter is a new concept, and the researchers are introducing the robotic patch panel and a new communication scheme as a cheaper way to build out AI networking infrastructure. Meta is now testing the technology’s viability.