It’s no secret that NVIDIA is eager to take elements of its successful GPGPU ecosystem, developed in partnership with the HPC community, and transfer them to the much broader, and more lucrative, consumer space. At SC15, CEO Jen-Hsun Huang referred to machine learning as HPC’s first consumer killer app. Google, Microsoft, and Facebook have all made major machine learning inroads in recent months, and GPUs are a key element in all of these projects.
NVIDIA has been especially keen to apply its deep learning prowess to enable autonomous vehicles. Yesterday, the GPU vendor launched NVIDIA DRIVE PX 2, an autonomous vehicle development platform powered by the 16nm FinFET-based Pascal GPU, the named successor to Maxwell. Like last year’s DRIVE PX, the next-gen development platform targets NVIDIA’s automotive partners, a growing list that includes Audi, BMW, Daimler, Ford and dozens more.
Equipped with two Tegra SoCs with ARM cores plus two discrete Pascal GPUs, the new platform is capable of delivering up to 24 trillion deep learning operations per second — 10 times what the previous-generation product offered. In terms of general computing capability, the PX 2 offers an aggregate of 8 teraflops of single-precision performance, a four-fold increase over the PX 1. In addition to pertinent interfaces and middleware, the development platform includes the Caffe deep learning framework to run DNN models designed and trained on DIGITS, NVIDIA’s interactive deep learning training system.
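NVIDIA did not publish the previous generation’s figures alongside these claims, but the stated multipliers let us back them out. A quick sanity check of the arithmetic (the PX 1 baselines below are inferred from the 10x and 4x claims, not independently confirmed):

```python
# Published DRIVE PX 2 figures from NVIDIA's announcement.
px2_dl_ops = 24e12      # 24 trillion deep learning operations per second
px2_fp32_flops = 8e12   # 8 teraflops single-precision

# NVIDIA claims 10x the deep learning throughput and 4x the FP32
# throughput of the previous-generation product, so the implied
# baselines work out to:
px1_dl_ops = px2_dl_ops / 10        # 2.4 trillion DL ops/s
px1_fp32_flops = px2_fp32_flops / 4  # 2 teraflops FP32

print(f"Implied PX 1: {px1_dl_ops / 1e12:.1f} trillion DL ops/s, "
      f"{px1_fp32_flops / 1e12:.1f} TFLOPS FP32")
```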
NVIDIA is touting the system as enabling self-driving applications to be developed faster and more accurately. At CES 2016, Huang called the system “the world’s first in-car AI supercomputer designed to make it possible to realize the vision of self-driving cars.” The computational capability is, according to the CEO, equivalent to 150 MacBook Pros, and its liquid-cooled “lunchbox” form factor fits easily in the glove box or the trunk. Yes, it’s liquid cooled, a first in the automotive computing space — another sign that the traditional lines (between HPC, embedded, consumer and so forth) are blurring.
Reprising a conversation he had with Elon Musk on stage at GTC15, Huang noted that humans are the least reliable part of the car, responsible for most of the one million automotive-related fatalities each year. Thus, said Huang, replacing the human altogether will make a great contribution to society. Perception is the main challenge, and deep learning can achieve superhuman perception capability. DRIVE PX 2 can process inputs from 12 video cameras, plus lidar, radar and ultrasonic sensors. This 360-degree assessment makes it possible to detect objects, identify their positions relative to the car, and then calculate a safe and comfortable trajectory.
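The flow Huang describes — fuse 360-degree sensor data, detect and localize objects, then plan a path — can be sketched in outline. This is a purely hypothetical illustration of the stages; the function names, data types, and trivial placeholder logic are assumptions for illustration and bear no relation to NVIDIA’s actual software:

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class Detection:
    label: str   # e.g. "pedestrian", "vehicle"
    x: float     # distance ahead of the car (metres)
    y: float     # lateral offset (metres, left positive)

def fuse_sensors(cameras: List[Any], lidar: Any, radar: Any,
                 ultrasonic: Any) -> List[Detection]:
    """Stage 1: fuse 360-degree sensor input into object detections.
    (Stub: a real system would run DNN inference on the camera
    frames and cross-check against lidar/radar/ultrasonic returns.)"""
    return [Detection("vehicle", 30.0, -1.5)]

def plan_trajectory(detections: List[Detection]) -> List[Tuple[float, float]]:
    """Stage 2: compute a safe, comfortable path given the detections.
    (Stub: steers slightly away from any object on the right.)"""
    offset = 0.5 if any(d.y < 0 for d in detections) else -0.5
    return [(step * 10.0, offset) for step in range(1, 4)]

# 12 camera feeds plus the other sensor modalities, as in DRIVE PX 2.
detections = fuse_sensors([None] * 12, None, None, None)
path = plan_trajectory(detections)
print(path)
```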
Some fifty automotive makers are using PX 1, and there is already one marquee partner for the new product: Volvo. As part of the world’s first public trial of autonomous driving, the Swedish automaker will lease 100 XC90 luxury SUVs outfitted with DRIVE PX 2 technology to customers in Sweden.
“Drivers deal with an infinitely complex world,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA. “Modern artificial intelligence and GPU breakthroughs enable us to finally tackle the daunting challenges of self-driving cars.
“NVIDIA’s GPU is central to advances in deep learning and supercomputing. We are leveraging these to create the brain of future autonomous vehicles that will be continuously alert, and eventually achieve superhuman levels of situational awareness. Autonomous cars will bring increased safety, new convenient mobility services and even beautiful urban designs — providing a powerful force for a better future.”
DRIVE PX 2 is scheduled for general availability in the fourth quarter of 2016, according to NVIDIA, with priority availability offered to early access development partners in the second quarter. The release of DRIVEWORKS, the new software development kit for DRIVE PX, is planned for early spring 2016.