Nvidia Ups Hardware Game with 16-GPU DGX-2 Server and 18-Port NVSwitch

By Tiffany Trader

March 27, 2018

Nvidia unveiled a raft of new products at its annual technology conference in San Jose today, and, as expected, there was no major new silicon announcement (or any mention of what’s next on the roadmap). Chip consumers can only handle so many refreshes, and the V100 just came out in September. But Nvidia did have a few surprises in store for HPC hardware aficionados: the de facto server maker is announcing a higher-memory V100, an upgraded DGX server and, most impressively, Nvidia’s first-ever switch technology.

In front of nearly 8,000 attendees at the San Jose Convention Center, in a two-and-a-half-hour keynote, Nvidia CEO Jensen Huang announced that the company is upgrading its Tesla V100 products (SXM and PCIe modules) to 32 GB of memory each, a 2x boost, to help data scientists train deeper, larger models and to boost the performance of memory-constrained HPC applications.

The original V100s carried 16 GB of HBM2 in four-high stacks; the new chip uses eight-high stacks. All the other specs are unchanged (floating-point rates, CUDA core counts, thermals, electricals), which will be a relief to channel partners and end users who just made investments in the V100 and related components. The larger memory helps with larger networks, enabling bigger batch sizes and more training in parallel. Nvidia said it is seeing neural machine translation and large-scale FFTs, the latter commonly used in the oil and gas industry and in signal processing, run about 50 percent faster.
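The capacity doubling follows directly from the stack height, since the stack count and per-layer density are unchanged. A back-of-the-envelope sketch, assuming V100's four HBM2 stacks per package and 1 GB per DRAM layer (figures not stated in the article):

```python
# Rough capacity check for the V100 memory upgrade.
# Assumptions (not from the article): 4 HBM2 stacks per GPU, 1 GB per DRAM layer.
STACKS_PER_GPU = 4
GB_PER_LAYER = 1

for layers in (4, 8):  # 4-high (original 16 GB part) vs. 8-high (new part)
    capacity_gb = STACKS_PER_GPU * layers * GB_PER_LAYER
    print(f"{layers}-high stacks -> {capacity_gb} GB")
```

Under those assumptions, the four-high configuration lands at 16 GB and the eight-high at 32 GB, matching the announced parts.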

Huang also unveiled the 16-GPU DGX-2 server, and, true to the ‘2’ in the name, the box delivers two petaflops at half precision (FP16), twice the computational amperage of the first-iteration, eight-GPU DGX-1. So how did Nvidia pack 16 NVLink-connected GPUs into one server when each V100 GPU has only six NVLink ports? That brings us to the next part of the hardware reveal: the NVLink Switch, or NVSwitch.
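The two-petaflops headline is consistent with Nvidia's quoted 125-teraflops FP16 tensor-core peak per V100 (a spec from Nvidia's datasheet, not repeated in the article); a quick sanity check:

```python
# Sanity check on the DGX-2 headline number.
# Assumes Nvidia's quoted 125 TFLOPS FP16 tensor-core peak per V100.
V100_FP16_TENSOR_TFLOPS = 125
GPUS_PER_DGX2 = 16

total_pflops = GPUS_PER_DGX2 * V100_FP16_TENSOR_TFLOPS / 1000
print(total_pflops)  # 2.0 petaflops
```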

Nvidia’s NVSwitch

The NVSwitch extends the innovations of Nvidia’s NVLink interconnect and offers 5x higher bandwidth “than the best PCIe switch,” according to Nvidia. It is an 18-port, fully connected crossbar switch ASIC (the DGX-2 employs a dozen of them) that allows users to build an NVLink fabric. Each port delivers 50 GB/sec, for a total of 900 GB/sec of aggregate bidirectional NVLink bandwidth in a single device. “It is a fully connected crossbar internally; every port is connected to every other port at full speed,” said Nvidia’s Ian Buck in a pre-announcement press briefing yesterday. “We spared no expense on the design of this to make sure we would never be limited by GPU-to-GPU communication.”
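The aggregate figure is simply ports times per-port rate; a minimal check using only the numbers quoted above:

```python
# NVSwitch aggregate bandwidth from the figures in the article.
PORTS = 18
GB_PER_SEC_PER_PORT = 50  # bidirectional NVLink bandwidth per port

aggregate_gb_per_sec = PORTS * GB_PER_SEC_PER_PORT
print(aggregate_gb_per_sec)  # 900 GB/s per NVSwitch device
```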

The DGX-2 is a monster. The 10U, 350-pound mega-server houses two baseboards, each carrying eight 32-GB V100s and six NVSwitches, enabling the GPUs to communicate at a record 2.4 TB per second. All those GPUs and switches consume a lot of power: the entire machine can burn a turbine-spinning 10,000 watts.
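The 2.4 TB/sec figure lines up with every GPU on a baseboard driving all six of its NVLink ports at once; a sketch assuming six 50-GB/sec links per V100 (the per-link rate comes from the NVSwitch port spec, the six-port count from the V100's NVLink complement):

```python
# Per-baseboard GPU communication bandwidth in the DGX-2.
GPUS_PER_BOARD = 8
NVLINKS_PER_GPU = 6        # each V100 exposes six NVLink ports
GB_PER_SEC_PER_LINK = 50   # bidirectional rate per NVLink port

total_tb_per_sec = GPUS_PER_BOARD * NVLINKS_PER_GPU * GB_PER_SEC_PER_LINK / 1000
print(total_tb_per_sec)  # 2.4
```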

The server supports both InfiniBand and 100G Ethernet and increases system memory to 1.5 TB (LRDIMM), up from 512 GB. It has two Intel Xeon Platinum (Skylake-SP) CPUs and 30 TB of NVMe SSD storage, expandable up to 60 TB. Since not every user wants to use all 16 GPUs all the time, and to make the product cloud-friendly, Nvidia is announcing full KVM support: the system can run all 16 GPUs over NVSwitch, or it can be segmented down to a single GPU.

Nvidia announced that with the DGX-2 it has cut the training time of FAIRSeq, a neural machine translation model, from 10 days (on the V100-equipped DGX-1) to 1.5 days, which it characterizes as a 10x improvement in six months. Nvidia claims it would take the equivalent of about 300 Skylake servers to match the performance of this single server.

The price tag for the DGX-2 server is $399,000 and availability is scheduled for the third quarter.

The 32-GB V100 GPU is available immediately across Nvidia’s entire DGX portfolio, and it will also be available from major computer manufacturers, including IBM, Cray, Hewlett Packard Enterprise, Lenovo, Supermicro and Tyan. Oracle announced that it will offer the Tesla V100 32GB in Oracle Cloud Infrastructure in the second half of 2018.

Nvidia also announced that it has updated its deep learning stack with new versions of Nvidia CUDA, TensorRT, NCCL and cuDNN. Nvidia said it has crossed the 8 million mark for total CUDA downloads, with more than half of those coming in the last year.
