Having brought its top-of-the-line datacenter GPU to the largest cloud vendors, Nvidia is now touting its Volta architecture for a range of scientific computing tasks as well as AI development, adding a container registry designed to deploy GPU-accelerated cloud applications for everything from visualization to drug discovery.
In its drive to expand access to the Volta architecture, Nvidia announced the availability of its Tesla V100 GPU on Microsoft Azure. Azure is the latest addition to the chipmaker’s growing list of public and private cloud service providers, along with server makers, offering “GPU-accelerated” services for AI development projects such as training deep learning models, which require more processing cores and access to big data.
Moving beyond the AI market, Nvidia on Monday (Nov. 13) unveiled a container registry designed to ease deployment of HPC applications in the cloud. The container registry for scientific computing applications and visualization tools would connect researchers with widely used GPU-optimized HPC software, the company said during this week’s SC17 conference in Denver.
Last month, the company introduced deep learning applications and AI frameworks in its Nvidia GPU Cloud (NGC) container registry. The AI container registry was rolled out on Amazon Web Services’ Elastic Compute Cloud instances running on Tesla V100 GPUs.
The HPC application containers announced this week include a long list of third-party scientific applications. HPC visualization containers are available in beta on the GPU cloud.
As GPU processing moves wholesale to the cloud and datacenters, easing application deployment was the next logical step as Nvidia extends its reach beyond AI development to scientific computing. (The company notes that the 2017 Nobel Prize winners in chemistry and physics used its CUDA parallel computing platform and API model. Nvidia’s Volta architecture includes more than 5,000 CUDA cores.)
HPC containers are designed to package the libraries and dependencies needed to run scientific applications on top of container infrastructure such as Docker Engine. The cloud container registry for delivering HPC applications uses Nvidia’s Docker distribution to run visualizations and other tasks in GPU-accelerated clouds. The service is available now.
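The packaging model described above can be sketched as a container build file: start from a CUDA base image and layer in the application and its library dependencies. This is a hypothetical configuration sketch, not an actual NGC image; the image tag, package, and application paths are illustrative assumptions.

```dockerfile
# Hypothetical HPC application container: CUDA base image plus
# the libraries and application binaries the workload depends on.
# Tag, packages, and paths below are illustrative, not real NGC contents.
FROM nvidia/cuda:9.0-devel
RUN apt-get update && apt-get install -y libopenmpi-dev
COPY ./my-simulation /opt/my-simulation
ENTRYPOINT ["/opt/my-simulation/run"]
```

An image built this way would then be launched on GPU-accelerated infrastructure with Nvidia’s Docker wrapper of that era, e.g. `nvidia-docker run my-simulation`, which exposes the host’s GPUs and driver to the container.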
Underpinning these scientific workloads in the cloud is the Volta architecture, asserts Nvidia CEO Jensen Huang. “Volta has now enabled every researcher in the world to access…the most advanced high-performance computer in the world at the lowest possible price,” Huang claimed during SC17. “You can rent yourself a supercomputer for three dollars” per hour.
The other part of the GPU equation is the software stack and keeping it optimized. Hence, Nvidia has placed the software components in its GPU cloud via the container registry. The containerized software stack can then be downloaded from Nvidia’s cloud and datacenter partners.
Emphasizing Nvidia’s drive to make GPU processing more accessible, Huang concluded: “In the final analysis, it’s got to be simple.”
Nvidia also took advantage of the SC17 launch pad to announce it is building a new supercomputer to enable high-performance workflows inside the company. SaturnV with Volta continues the tradition of the DGX-1 SaturnV that the company announced last year at SC16, but swaps out the Pascal-based P100s for Volta-based V100s. Nvidia is also greatly expanding the system: from 124 nodes to 660 nodes. Once complete in early 2018, it will offer 40 petaflops of peak double-precision floating point performance, Nvidia said. An early version of the system appeared on the 50th Top500 list (revealed Monday), delivering 1.07 Linpack petaflops across 30 DGX-1 nodes, sufficient for a 149th ranking. That system, installed at Nvidia headquarters in San Jose, Calif., also secured the fourth-highest spot on the Green500 list.
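A back-of-the-envelope check shows the quoted figures hang together, assuming (these specifics are not from the article) eight Tesla V100 GPUs per DGX-1 node and roughly 7.8 teraflops of peak double-precision performance per V100:

```python
# Sanity-check the SaturnV numbers quoted above.
# Assumptions (not stated in the article): 8 V100 GPUs per DGX-1 node,
# ~7.8 FP64 teraflops peak per V100.
V100_FP64_TFLOPS = 7.8
GPUS_PER_NODE = 8

node_peak_tflops = V100_FP64_TFLOPS * GPUS_PER_NODE       # peak TF per DGX-1 node
full_system_pflops = node_peak_tflops * 660 / 1000         # 660-node system peak, in PF

# The 30-node early system delivered 1.07 Linpack petaflops:
linpack_tflops_per_node = 1.07 * 1000 / 30
linpack_efficiency = linpack_tflops_per_node / node_peak_tflops

print(f"Full-system peak: ~{full_system_pflops:.1f} PF")
print(f"Linpack efficiency of 30-node system: ~{linpack_efficiency:.0%}")
```

Under those assumptions the 660-node build lands at roughly 41 petaflops of peak FP64, in line with the stated 40 petaflops, and the early 30-node entry ran Linpack at a plausible ~55-60 percent of peak.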
“This is one of the fastest and greenest supercomputers in the world and we use it for our high-performance computing software stack development,” said Huang.
“I believe this is the future of software development,” he continued. “Until now, most of our software engineers coded on their laptop, they compiled it and ran regression tests in the datacenter. Now we have to have our own supercomputing infrastructure. I believe every single industry, every single company will eventually have to have high performance computing infrastructures, opening up the opportunity for the HPC industry.”
–Tiffany Trader contributed to this report.