At the Optical Fiber Communication Conference (OFC), taking place March 22-24 in Anaheim, Calif., Mellanox Technologies is announcing an “important milestone” on the road to High Data Rate (HDR) 200 Gb/s InfiniBand and Ethernet networks. At the trade show, the company is demonstrating 50 Gb/s silicon photonics optical modulators and detectors, which will serve as key elements of its 200 Gb/s and 400 Gb/s LinkX cables and transceivers.
“We are the first company to lay out a strategy for 200 Gb/s that’s based on the very same QSFP form factor that 40 Gigabit networks, 56 Gb/s in the case of InfiniBand, and 100 Gigabit networks in the case of InfiniBand and Ethernet are all based on,” states Arlon Martin, senior director of marketing at Mellanox Technologies.
The 200 Gb/s target assumes a four-lane port and a 50 Gb/s per-lane signal rate. With 36 4x QSFP ports on its front panel, Mellanox’s EDR Switch-IB, a 100 Gb/s switch, delivers 7.2 Tb/s of aggregate throughput. Moving from QSFP28 to QSFP56 modules doubles the per-port data rate within the same front panel footprint, giving next-generation switches a potential 14.4 Tb/s of switching capacity.
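As a back-of-the-envelope check on those figures, here is a minimal sketch; the port count and lane rates come from the article, and doubling for bidirectional traffic is an assumption consistent with the quoted totals:

```python
# Rough switch throughput estimate, assuming the quoted aggregate
# figures count both directions of every port.
def aggregate_tbps(ports, lanes_per_port, gbps_per_lane, directions=2):
    return ports * lanes_per_port * gbps_per_lane * directions / 1000.0

# EDR Switch-IB: 36 QSFP28 ports, 4 lanes x 25 Gb/s per port
print(aggregate_tbps(36, 4, 25))   # 7.2 Tb/s
# HDR with QSFP56: same 36 ports, 4 lanes x 50 Gb/s per port
print(aggregate_tbps(36, 4, 50))   # 14.4 Tb/s
```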
Mellanox says it is planning to offer 50 Gb/s and 200 Gb/s Direct Attach Copper (DAC) cables; copper splitter cables; silicon photonics based AOCs for reaches of up to 200 meters; and silicon photonics transceivers for reaches of up to 2 km. The 200 Gb/s cables and transceivers will also support previous-generation 40 and 100 Gb/s networks.
Mellanox owns more than 100 patents in silicon photonics, and the company leveraged that IP to develop these two key optical components. “With silicon photonics and with the technology we have, it gives us the capability to do high speeds in high dimensions in terms of size of chips and to do it within the power envelope that we can squeeze this into a small package,” says Martin. The modulator Mellanox uses is 40 microns long and has demonstrated bandwidths of 60 GHz and higher. The detectors are germanium-based.
The physical dimension which governs the speed of the device is the width of the waveguide. “In order to make it very fast, we narrow the width of that waveguide to speed the conversion of electrons to photons,” Martin explains. “Our architecture is unique in that it really takes advantage of the semiconductor process, where we have good photolithography and we can narrow the width of that waveguide at the modulator section very precisely and that gives us very high-speed devices – we use the same physical effect when we make the detectors. We narrow the waveguide down and it makes the detector faster.”
Mellanox anticipates that the first-generation 200 Gb/s devices will fall in either the 4.5 watt or the 5 watt class, offering a power-per-bit savings of around 50 percent. Mellanox’s current 100 Gb/s devices are in the 3.5 watt class. Martin notes that the QSFP committee has added two new power classes to the QSFP specification, relaxing the 3.5 watt limit so that modules can dissipate more power.
“Each time we go up to a new generation of technology, the cost per bit goes down,” says Martin. This aligns with the needs of the hyperscale datacenters – the big cloud datacenters and Web 2.0 players – who want to move data faster and cheaper and in the same space and footprint. “They have been pushing for 100 Gigabit and they are pushing very hard for 200 and 400 Gigabit Ethernet. For today’s networks, 100 Gigabits in servers that are 25 Gigabits is a very good ratio but in two years’ time, those servers will be 50 Gig and if they are 50 Gig then we need 200 Gigabit networks so we can maintain that same symmetry.”
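To make the symmetry argument concrete, here is a minimal sketch of the network-to-server speed ratio Martin describes (the helper name is hypothetical; the speeds are the ones he cites):

```python
# Ratio of network link speed to server NIC speed; Martin's argument is
# that this ratio should stay constant across generations.
def network_to_server_ratio(network_gbps, server_gbps):
    return network_gbps / server_gbps

print(network_to_server_ratio(100, 25))  # today: 25 Gb/s servers, 100 Gb/s networks -> 4.0
print(network_to_server_ratio(200, 50))  # tomorrow: 50 Gb/s servers, 200 Gb/s networks -> 4.0
```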
The high-performance computing side is very similar, he continues, with some distinctions: datacenter networks tend to rely on oversubscription, whereas HPC deployments use fat-tree or Clos networks with 1:1 (non-blocking) connectivity, which gives every CPU full access throughout the whole network. In that case, if CPU performance goes up, the whole network has to scale up with it, says Martin.
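As an illustration of the non-blocking case, here is a hypothetical sketch of how many endpoints a two-level fat-tree built from 36-port switches (like the radix mentioned above) can serve at full bandwidth, assuming the standard fat-tree construction:

```python
# Maximum endpoints in a non-blocking two-level fat-tree built from
# radix-k switches: each leaf uses k/2 ports down to hosts and k/2 up
# to spines, so every CPU keeps full bandwidth through the fabric.
def max_endpoints_two_level(radix):
    leaves = radix                    # one leaf per spine port
    down_ports_per_leaf = radix // 2  # remaining ports go to spines
    return leaves * down_ports_per_leaf

print(max_endpoints_two_level(36))    # 648 nodes at full bisection bandwidth
```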
“HDR is the next point in which we want to be able to build and be able to have 200 Gb/s networks to support high-performance computing,” states Martin.
He adds that there is a lot of industry confusion as to what form factor to use beyond 100 Gb/s. “If you follow the minutiae of our industry, you’ll see many different form factors – like CFP and CFP2 and CFP4 and CFP8 – because many companies haven’t figured out how to put 200 and 400 Gigabit into a pluggable form that is backward compatible with today’s products,” he states. “If you take the cover off of our 100 Gb/s QSFP module, you’ll find it’s a very simple architecture; there’s no hermetic packages, there’s no complex assembly process — and we’re able to double the speed and use the very same manufacturing platform, the very same methodology and the very same form factor.”
“We expect there will be other people who will follow our lead in this area – but we are the first to announce what we are going to do at 200 Gb/s, how we are going to put it into the package and that we have the key components, the optical devices and the detectors, already completed in terms of the design work.”
Along with IBM and NVIDIA, Mellanox was selected to build the next-generation leadership systems at Oak Ridge and Livermore (part of the CORAL procurement). Those systems, Summit and Sierra respectively, will be deployed in the 2017 timeframe and will contain thousands of compute nodes built from IBM POWER CPUs and NVIDIA GPUs, interconnected with a dual-rail Mellanox EDR 100 Gb/s InfiniBand fabric.
Mellanox has previously said it expects to release HDR 200 Gb/s technology in the 2017 timeframe, but it has offered no additional guidance on a release date at this time. Increased competition in the networking space is coming from Intel Corp. and its Omni-Path Architecture, which will use a 48-port switch chip capable of 9.6 Tb/s of aggregate switching bandwidth. Mellanox’s Gilad Shainer has downplayed the competitive threat, stating that he doesn’t think Omni-Path can compete on application performance.