Nvidia Focuses Its Cloud Containers on HPC Applications

By George Leopold

November 14, 2017

Having migrated its top-of-the-line datacenter GPU to the largest cloud vendors, Nvidia is touting its Volta architecture for a range of scientific computing tasks as well as AI development. The latest step is a container registry designed to deploy its GPU cloud for everything from visualization to drug discovery.

Nvidia CEO Jensen Huang at SC17 in Denver

In its drive to expand access to its Volta architecture, Nvidia announced the availability of its Tesla V100 GPU on Microsoft Azure. Azure is the latest cloud service to join the chipmaker’s growing list of public and private cloud service providers, along with server makers. Most are offering the GPU-accelerated services for AI development projects such as training deep learning models that require more processing cores and access to big data.

Moving beyond the AI market, Nvidia on Monday (Nov. 13) unveiled a container registry designed to ease deployment of HPC applications in the cloud. The container registry for scientific computing applications and visualization tools would connect researchers with the most widely used GPU-optimized HPC software, the company said during this week’s SC17 conference in Denver.

Last month, the company introduced deep learning applications and AI frameworks in its Nvidia GPU Cloud (NGC) container registry. The AI container registry was rolled out on Amazon Web Services’ Elastic Compute Cloud instances running on Tesla V100 GPUs.

The HPC application containers announced this week include a long list of third-party scientific applications. HPC visualization containers are available in beta on the GPU cloud.

As GPU processing moves wholesale to the cloud and datacenters, easing application deployment was the next logical step as Nvidia extends its reach beyond AI development to scientific computing. (The company notes that the 2017 Nobel Prize winners in chemistry and physics used its CUDA parallel computing platform and programming model. Nvidia’s Volta architecture includes more than 5,000 CUDA cores.)

HPC containers are designed to package the libraries and dependencies needed to run scientific applications on top of container infrastructure such as Docker Engine. The cloud container registry for delivering HPC applications uses Nvidia’s Docker distribution to run visualizations and other tasks in GPU-accelerated clouds. The service is available now.
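To illustrate the kind of workflow the registry enables, here is a minimal sketch using the Docker SDK for Python against Nvidia’s nvcr.io registry. The image name and tag, the mounted paths and the application command line are illustrative assumptions rather than details from the announcement; actual image names come from the NGC catalog.

```python
# Minimal sketch: pull a GPU-optimized HPC application image from the NGC
# registry and run it through the NVIDIA container runtime.
# Assumptions: image name/tag, mount paths and command line are hypothetical.
import docker

client = docker.from_env()

# Pull an application image from Nvidia's container registry (nvcr.io).
client.images.pull("nvcr.io/hpc/namd", tag="2.12")  # hypothetical image tag

# Run it with the NVIDIA runtime (nvidia-docker2) so the container sees the GPUs.
logs = client.containers.run(
    "nvcr.io/hpc/namd:2.12",
    command="namd2 +p8 +devices 0 /workspace/apoa1.namd",  # illustrative workload
    runtime="nvidia",            # runtime provided by the nvidia-docker2 package
    volumes={"/data": {"bind": "/workspace", "mode": "rw"}},
    remove=True,                 # clean up the container when it exits
)
print(logs.decode())
```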

Underpinning these scientific workloads in the cloud is the Volta architecture, asserted Nvidia CEO Jensen Huang. “Volta has now enabled every researcher in the world to access…the most advanced high-performance computer in the world at the lowest possible price,” Huang claimed during SC17. “You can rent yourself a supercomputer for three dollars” per hour.

The other part of the GPU equation is the software stack and keeping it optimized. Hence, Nvidia has placed those software components in the GPU cloud via its container registry. The containerized software stack can then be downloaded and deployed on the infrastructure of Nvidia’s cloud and datacenter partners.

Emphasizing Nvidia’s drive to make GPU processing more accessible, Huang concluded: “In the final analysis, it’s got to be simple.”

Nvidia also took advantage of the SC17 launch pad to announce it is building a new supercomputer to enable high-performance workflows inside the company. SaturnV with Volta continues the tradition of the DGX-1 SaturnV that the company announced last year at SC16, but swaps out the Pascal-based P100s for Volta-based V100s. Nvidia is also greatly expanding the system, from 124 nodes to 660 nodes. Once complete in early 2018, it will offer 40 petaflops of peak double-precision floating point performance, Nvidia said. An early version of the system appeared on the 50th Top500 list (revealed Monday), delivering 1.07 Linpack petaflops on 30 DGX-1 nodes, sufficient for a 149th ranking. That system, installed at Nvidia headquarters in San Jose, Calif., also secured the fourth-highest spot on the Green500 listing.

“This is one of the fastest and greenest supercomputers in the world and we use it for our high-performance computing software stack development,” said Huang.

“I believe this is the future of software development,” he continued. “Until now, most of our software engineers coded on their laptop, they compiled it and ran regression tests in the datacenter. Now we have to have our own supercomputing infrastructure. I believe every single industry, every single company will eventually have to have high performance computing infrastructures, opening up the opportunity for the HPC industry.”

–Tiffany Trader contributed to this report.
