TAIPEI, Taiwan, Nov. 17, 2020 — GIGABYTE Technology, an industry leader in high-performance servers and workstations, today announced support for the NVIDIA HGX A100 AI supercomputing platform, with servers available in both air-cooled and liquid-cooled versions. These new servers (G492-ZD0, G492-ZL0, G262-ZR0 and G262-ZL0) will also accommodate the new NVIDIA A100 80GB Tensor Core version of the NVIDIA HGX A100, which delivers over 2 terabytes per second of memory bandwidth and 2x larger NVIDIA Multi-Instance GPU (MIG) instances. These new systems, with 600GB/s GPU-to-GPU bandwidth, are primed for supercomputing and will be deployed in data centers for HPC, AI, deep learning, and data science, as well as applications that use MIG for GPU partitioning to maximize the utility of every GPU.
Combined with out-of-the-box optimized AI models and applications from the NVIDIA NGC catalog, HGX A100 is the most powerful end-to-end AI and HPC platform for data centers, allowing researchers to rapidly deliver real-world results and deploy solutions into production at scale.
“More memory capacity and higher bandwidth are critical to achieving the performance required by today’s high-performance servers and workstations to drive the next wave of AI applications and scientific discoveries,” said Paresh Kharya, Senior Director at NVIDIA. “GIGABYTE servers powered by NVIDIA HGX A100 deliver the performance to make that happen.”
For the NVIDIA HGX A100 8-GPU platform, GIGABYTE offers an air-cooled version, G492-ZD0, and a liquid-cooled version, G492-ZL0. For the HGX A100 4-GPU platform, a GIGABYTE 2U chassis offers an air-cooled version, G262-ZR0, and a liquid-cooled version, G262-ZL0. Additionally, the servers are NVIDIA Mellanox HDR InfiniBand ready and can deliver 200Gb/s of high-performance network connectivity.
Specific details about the new servers will come later this year, and the products will be available in Q1 2021.