In the competitive global HPC landscape, system and processor vendors, nations and end user sites certainly get a lot of attention (deservedly so), but more than ever the network plays a crucial role. While fast interconnects have always been a critical enabler of HPC machinery, that is increasingly true in the age of heterogeneous computing architectures. The latest edition of the Top500, released at ISC in Frankfurt last month, shows that while InfiniBand is the second most-used internal system interconnect technology on the list, trailing Ethernet 140 systems to 247, it continues to connect the majority of the list’s HPC systems.
InfiniBand technology is used in the top three fastest systems, including the new Linpack leader Summit at Oak Ridge National Laboratory, and in nearly 60 percent of the HPC category: the systems Mellanox identifies as being used for actual high-performance computing applications rather than cloud/Web workloads. The InfiniBand Trade Association notes that the HPC category carries the requirement of “high bandwidth and compute efficiency for processing massive, complex data sets.” By its measure, nearly half of the platforms on the Top500 list can be categorized as non-HPC application platforms (mostly Ethernet-based).
InfiniBand connects the number-one HPC system in half of the twenty-seven countries that submitted to the June 2018 Top500 list, including the US, China, Japan and 11 more (see the complete list at the end of this article), and those machines account for nearly 78 percent of total “number-one” flops. (Note: the top-ranked machines in Brazil, Ireland and the Netherlands are Ethernet-based cloud systems, built by Lenovo.)

Out of the total 500 grouping, Mellanox technology connects 216 systems, a 13 percent jump in system share from six months prior. InfiniBand, however, continues to lose ground to Ethernet as non-traditional “supercomputers” from the Web and cloud sphere continue to enter the list. There are currently 140 InfiniBand machines (including Sunway in China, which employs a semi-custom variant of Mellanox InfiniBand), down nearly 15 percent from November’s total of 164 systems. A total of 247 systems on the latest list use some manner of Ethernet (Mellanox counts 76 of these), up from 228 in November, an 8 percent climb.
Intel’s Omni-Path Architecture (OPA) is the interconnect on 39 machines, and 48 systems make use of Cray (Aries/Gemini) technology. That leaves 26 systems using some other flavor of custom/proprietary interconnect (BlueGene, Fujitsu Tofu, Bull BXI, NUDT’s TH Express-2, etc.). Together with the 140 InfiniBand and 247 Ethernet systems, those counts account for all 500 entries.
To a great extent, InfiniBand’s prominence in Top500-class HPC systems is a question of available alternatives. OPA has made a decent showing since its debut two-and-a-half years ago and is a credible challenger, but Mellanox is ahead in 200 Gbps technology readiness. The InfiniBand vendor says it will begin shipping its 200 Gbps HDR adapters and switches this year, while Intel’s next-generation interconnect, OPA200, is not expected until 2019.
Discussing the company’s strategy at ISC, Mellanox’s Scot Shultz emphasized its offloading design approach: an ongoing effort to push more intelligence into the network to reduce the burden on system CPUs. Workload requirements, standardization, tools, user comfort and confidence, and to some extent price, all certainly play a role in system network selection.
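From the application side, that offload approach is meant to be transparent. As a minimal sketch (assuming any standard MPI installation; Mellanox’s SHARP in-network aggregation is cited only as an illustrative example of offload and is not something this code enables by itself), the collective call below is written the same way whether the reduction executes on host CPUs or is aggregated in the switch fabric by the underlying network stack:

/* Minimal MPI allreduce sketch (illustrative; assumes any standard MPI library).
 * The application code is identical whether the reduction is computed on the
 * host CPUs or offloaded into the network (e.g., switch-based aggregation);
 * any offload is configured in the MPI/fabric stack, not in application source. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one value; the sum is returned to every rank. */
    double local = (double)rank;
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g\n", size, global);

    MPI_Finalize();
    return 0;
}

Because the offload happens below the MPI API, applications can pick up the CPU-cycle savings without source changes, which is part of the appeal of the in-network approach Shultz described.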
Mellanox, which has faced scrutiny from activist investor Starboard around its technology R&D investment, is enjoying a positive earnings trend in 2018. In its second quarter, ended June 30, 2018, Mellanox recorded revenue of $268.5 million, an increase of 26.7 percent compared to $212.0 million in the second quarter of 2017. Halfway through 2018, Mellanox is up 29.7 percent year-over-year ($519.5 million for the first half of 2018, compared to $400.6 million for the same period last year).
“We continue to see strong traction with our 25 gigabit per second and above solutions as they become the preferred solution of choice in hyperscale, cloud, high performance computing, artificial intelligence, storage, financial services and other markets across the globe,” stated Eyal Waldman, president and CEO of Mellanox Technologies. “Our Ethernet revenue grew 81 percent year-over-year driven by network adapter and switch growth with hyperscale and OEM customers. We are proud to see our InfiniBand solutions accelerate the world’s top three, and four of the top five supercomputers, as seen in the recently published TOP500 supercomputers list. Our performance in the second quarter further shows the benefit of our investment in diversifying our revenue base and the operational focus that is driving our higher profitability.”
Mellanox reached an agreement with Starboard in June.
Latest Mellanox number-one HPC machines by country:
USA: Oak Ridge National Laboratory – Summit
China: National Supercomputer Center in Wuxi – TaihuLight
Japan: AIST – AI Bridging Cloud Infrastructure (ABCI)
Italy: Exploration & Production Eni S.p.A. – HPC4
Germany: Forschungszentrum Juelich – JUWELS Module 1
Canada: University of Toronto – Niagara
Russia: Moscow State University – Lomonosov 2
Australia: National Computational Infrastructure National Facility – Raijin
Poland: Cyfronet – Prometheus
Czech Republic: IT4Innovations National Supercomputing Center – Salomon
Netherlands: SURFsara – Cartesius 2
South Africa: Centre for High Performance Computing – Lengau
Singapore: National Supercomputing Centre Singapore – NSCC
Norway: UNINETT Sigma2 AS – Fram