GTC 2019: Chief Scientist Bill Dally Provides Glimpse into Nvidia Research Engine

By John Russell

March 22, 2019

Amid the frenzy of GTC this week – Nvidia’s annual conference showcasing all things GPU (and now AI) – William Dally, chief scientist and SVP of research, provided a brief but insightful portrait of Nvidia’s research organization. It’s perhaps not gigantic by large company standards, roughly 175 full-time researchers worldwide, but still sizable and quite impactful. At GTC, the exhibit hall was packed as usual with sparkly new products in various stages of readiness. It’s good to remember that many of these products were ushered into existence or somehow enabled by earlier work from Dally’s organization.

“We have had many successes, only a small number of them are listed here (in his presentation), and in my view what we really do is invent the future of Nvidia,” said Dally during his press briefing.

William Dally, Nvidia chief scientist and head of research

Nvidia must agree and politely declined to share Dally’s slides afterward. Perhaps a little corporate wariness is warranted. No matter – a few phone pics will do. In his 20-minute presentation, Dally hardly gushed secrets but did a nice job of laying out Nvidia’s research philosophy and broad organization, and he even discussed a few of its current priorities. It’s probably not a surprise that optical interconnect is one pressing challenge being tackled, and that work is in progress on “something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit.” More on that project later.

Presented here are most of Dally’s comments (lightly edited). They comprise an overview of Nvidia’s approach to thinking about and setting up the research function in a technology-driven company. Some of the material will be familiar; some may surprise you. Before jumping in, it’s worth noting that Dally is well qualified for the job. He was recruited in 2009 from Stanford University, where he was chairman of the computer science department. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and received the 2010 Eckert-Mauchly Award. There’s a short bio at the end of the article.

  1. Philosophy – What is Research’s Role at Nvidia?

To give you an idea of what we do, I’ll give you our philosophy. Our goal is to stay ahead of most of Nvidia and try to do things that can’t be done in the product groups but will make a difference for the product groups. When Jensen talked me into leaving the academic world in 2008 to start Nvidia research in its current incarnation, I spent time surveying a lot of other industrial research labs and found that most of them either did great research or had a huge impact on the company, but almost none of them did both. The ones that did great research tended to publish lots of papers but were completely disconnected from their company’s product groups. Others wound up being consultants for the product groups, doing development work but not really doing much research. So I set as a goal for Nvidia research to walk this very narrow line between these two chasms, either of which will swallow you, to try to do great research and make a difference for the company.

Our philosophy on how to do research is to maximize what we learn for the minimum amount of effort put in. So in jumping into a project we basically look at the risk/reward ratio: what would the payoff be if we succeed, and how much effort is it going to take? We try to do experiments that are cheap, [require] very little effort, but if successful will have a huge impact.

Another thing we do is involve the product groups at the beginning of research projects rather than at the end, and this has two really valuable consequences. One is that by involving them at the beginning, they get a sense of ownership. When we’re done, we’re not just popping up with something they have never seen before; it is something they have been a godparent to from day one, incubating the technology along. Probably more important, though, is that by getting them involved at the beginning, we wind up solving real problems, not some artificial, academically imagined problem that we pose for ourselves. The technology winds up being much easier for them to adopt.

  2. Organization – Nvidia Tackles Supply and Demand

Roughly, we organize Nvidia research into two sides. There is a supply side that supplies technology to make GPUs better: a group that does circuits, a group that does design methodologies for VLSI, architecture for GPUs, networks, and programming systems. Programming systems really spans both sides. [The other side is] the demand side of Nvidia research, the part of the research lab that drives demand for GPUs. All of our work in AI – in perception and learning, applied deep learning, and the algorithms groups in our Toronto and Tel Aviv AI labs – drives demand by creating new algorithms for deep learning.

We also have three different graphics groups that drive demand in graphics. One of our goals is to continue to raise the bar for what good graphics is, because if that bar ever stays stationary we would get eaten from below by commodity graphics. We recently opened a robotics lab. Our goal is to develop the techniques that will make robots of the future work closely with humans and be our partners, and [to ensure that] Nvidia will be powering the brains of these robots. We have a lab that is looking into that.

The question mark down here (see slide) is our moonshot projects. Often we will pull people out of these different groups and kick off a moonshot. We had [one] a number of years ago to do a research GPU called Einstein, and Einstein morphed into Volta, which wound up being a great success. Then we had one a few years after that where we wanted to make real-time ray tracing a reality. We pulled people out of the graphics groups and architecture groups and kicked off a project that we internally called the TTU, for tree traversal unit, and it became the RT cores in Turing. So we have been able to have a number of very successful integration projects across these different groups.

We are geographically diverse, with many locations in the U.S. [and] Europe. We just opened a lab in Israel. I’d very much like to start a lab in Asia, but it really requires finding the right leader to start the lab. We tend to build labs around leaders rather than picking a geography and then trying to start something there.

  3. Engaging the Community – Publishing Ensures Quality; Open Sourcing Mobilizes Community Development

We publish pretty openly. Our goal is to engage the outside community, and publishing serves a number of functions, [such as] quality control. One of the things I have observed is that research labs that don’t publish quickly wind up doing mediocre research, because the scrutiny of peer review, while harsh at times, really is a great quality control measure. If your work is not good enough to get into a top-tier conference like NeurIPS or ICLR in AI, ISCA in architecture, or SIGGRAPH in graphics, then it’s not good. And you have to be honest with yourself about that.

In addition to publishing, we file a number of patents, building intellectual property for the company. We release a lot of open source software, and this is in many ways a higher-impact thing than a publication because people immediately grab open source software. Take much of the GAN (generative adversarial network) work we’ve done with progressive GANs: by open sourcing it, people immediately grabbed it, built on it, and started doing really interesting work. [It’s] a way of having that community feed itself and a way of making progress very rapidly. A small listing of some of the more recent papers we’ve published in leading venues is here (see slide).

  4. Successes – So How’s It Working?

We’ve had a lot of really big technology transfer successes to the company, and I have just listed a few of them here. Almost all of the work that Nvidia does in ray tracing started in Nvidia research. This includes our OptiX ray tracing product, which is sort of the core of our professional graphics. That started as a project when Steve Parker, who is now our general manager of professional graphics, was a research director reporting to me. After it became a successful research project, we basically moved Steve and his whole team into our content organization and turned it into a product.

Then, as I said, we had a moonshot that developed what became the RT core in Turing, and actually a lot of the algorithmic work that underlies it. Our very fast algorithms, [the] BVH (bounding volume hierarchy) trees that are important in deciding what the right directions are to cast rays, all started out as projects in Nvidia research. DGX-2 is based on a switch we developed, the NVSwitch. That started as a project in Nvidia research. We had a vision of building what are essentially large virtual GPUs – a bunch of GPUs all sharing their memories with very fast bandwidth between them – and we were building a prototype of this in research based on FPGAs. The product group got wind of it and basically grabbed it out of our hands before we were ready to let it go. But that’s an example of successful development and transfer.
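For context on what a “tree traversal unit” accelerates: a ray tracer spends much of its time walking a BVH, repeatedly testing rays against axis-aligned bounding boxes to cull geometry. Below is a minimal sketch of that core ray/box “slab test” – an illustration of the principle only, not Nvidia’s hardware algorithm, and it assumes the ray direction has no zero components.

```python
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray origin + t*direction enter the axis-aligned
    box for some t >= 0? Assumes no zero components in `direction`
    (a production implementation handles that edge case)."""
    inv = 1.0 / direction
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))  # latest entry across the three slabs
    t_far = np.min(np.maximum(t1, t2))   # earliest exit
    return t_far >= max(t_near, 0.0)

# A ray from the origin along (1,1,1) hits the box spanning (2,2,2)-(3,3,3).
print(ray_hits_aabb(np.array([0., 0., 0.]), np.array([1., 1., 1.]),
                    np.array([2., 2., 2.]), np.array([3., 3., 3.])))  # True
```

An RT core effectively runs millions of tests like this per frame in fixed-function hardware, which is why moving BVH traversal off the shader cores was worth a moonshot.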

cuDNN [was developed] back when Bryan Catanzaro (vice president of applied deep learning) was in our programming systems research group. I started a joint project with Andrew Ng (Baidu) and recruited Bryan to be the Nvidia representative. The software that came out of that really launched Nvidia into deep learning. Then came a bunch of applications in deep learning: image inpainting, noise-to-noise denoising, and progressive GAN, which really was the first GAN to crack the problem of producing good high-resolution images by not trying to learn everything at once but training the GAN progressively, starting with low-resolution 4×4 images and slowly working your way up to 1K or 4K images. Here’s a more complete list (see slide). I won’t go into it because I want to leave time for the interesting stuff later.
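The “progressive” idea is simple to state: train at a tiny resolution first, then repeatedly double it, adding layers as you go, so the GAN learns coarse structure before fine detail. The snippet below merely generates that doubling schedule as a sketch of the idea; the 4-to-1024 range matches the talk, while the actual training code is Nvidia’s and not shown here.

```python
# Resolution schedule for progressive GAN training: start tiny, double up.
# Each stage would add (and gradually fade in) new generator and
# discriminator layers before training continues at the higher resolution.
def progressive_schedule(start=4, final=1024):
    res = start
    while res <= final:
        yield (res, res)
        res *= 2

print(list(progressive_schedule()))
# [(4, 4), (8, 8), (16, 16), (32, 32), (64, 64), (128, 128),
#  (256, 256), (512, 512), (1024, 1024)]
```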

  5. FutureScan – What’s on the Slab in the Lab

I picked three projects from other parts of Nvidia research to give you a flavor for the breadth of what we do. One project we just kicked off is a collaboration with Columbia University and SUNY Poly to build a photonic version of NVLink and NVSwitch. Very often we gauge what research to do by projecting our [current] technologies forward and looking for where we are going to come up short. One place we are going to come up short is off-chip bandwidth, which is constrained both by bits per second per millimeter of chip edge – how many bits you can get on and off the chip given a certain perimeter – and by energy, picojoules per bit, getting that off the chip. Electrical signaling is pretty much on its last legs. We are going to be revving future versions of NVLink to 50 gigabits per second, then 100 gigabits per second, per pin, but then we’re kind of out of gas. What can we do beyond that?

What we do is go to optics, and our plan is to produce something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit, which is about an order of magnitude better than what we do today on both of those dimensions. The way we are able to do this is by producing a comb laser source. Basically, it generates a bunch of different frequency tones, and we can modulate each tone, so we are able to put about 400 gigabits per second on a single fiber by breaking it up into 32 different wavelengths we can individually turn on and off. We can then connect up a DGX-like box with switches that have fibers coming in and out of them and GPUs that have fibers coming in and out of them, and build very large DGX systems with very high bandwidth and very low power.
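Dally’s numbers hang together, and it’s worth seeing how. The short calculation below derives the per-wavelength data rate, the fiber density needed at the chip edge, and the power of a fully loaded fiber from the figures quoted in the talk; the derived values are plain arithmetic, not additional disclosed specs.

```python
# Back-of-the-envelope check of the photonic NVLink figures from the talk.
fiber_rate_gbps = 400            # aggregate rate per fiber
num_wavelengths = 32             # WDM tones per fiber from the comb laser
energy_pj_per_bit = 2.0          # target signaling energy
edge_target_gbps_per_mm = 2000   # 2 Tb/s per millimeter of chip edge

# Each wavelength carries 400/32 = 12.5 Gb/s, a very relaxed per-lane rate.
print(fiber_rate_gbps / num_wavelengths, "Gb/s per wavelength")

# Hitting 2 Tb/s/mm needs only 2000/400 = 5 fibers per millimeter of edge.
print(edge_target_gbps_per_mm / fiber_rate_gbps, "fibers per mm")

# A fully loaded fiber burns 400e9 b/s * 2 pJ/b = 0.8 W.
print(fiber_rate_gbps * 1e9 * energy_pj_per_bit * 1e-12, "W per fiber")
```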

We are also experimenting with scalable ways of doing deep learning. This also demonstrates a lot of the work we do in building research prototypes. We recently taped out and evaluated what we call RC18 – research chip 2018 – which is a deep learning accelerator that can be scaled from a very small size, [from] a single one of these PEs (processing elements), up to 16 PEs on a small die. We have integrated 36 dies on a single organic substrate, and the advantage here is that it demonstrated a lot of technologies: one is very efficient signaling from die to die on an organic substrate, [and] another is a scalable deep learning architecture. This has an energy per op – let’s see if I can get the numbers right – I believe of about 100 femtojoules per op doing deep learning inference, so it’s actually quite a bit better than most other deep learning inference engines currently available, and it is efficient at a batch size of one. We can have very low latency inference as well. The technology for signaling between the dies is something called ground-referenced signaling, and that’s probably about the best you can do electrically before we have to jump to the optical thing I showed you previously.
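To put the quoted efficiency in the units accelerator vendors usually advertise: 100 femtojoules per operation inverts to 10 tera-ops per second per watt, and the stated scaling range runs from a single PE up to 16 PEs across 36 dies in a package. The arithmetic below is derived from those quoted figures only.

```python
# Convert RC18's quoted ~100 fJ/op into the usual TOPS/W efficiency metric.
energy_per_op_j = 100e-15                  # 100 femtojoules per op
ops_per_watt = 1.0 / energy_per_op_j       # ops per joule == ops/s per watt
print(ops_per_watt / 1e12, "TOPS/W")       # -> 10.0

# Scaling range quoted in the talk: 1 PE up to 16 PEs/die * 36 dies/package.
pes_per_die, dies_per_package = 16, 36
print(pes_per_die * dies_per_package, "PEs in the full package")  # -> 576
```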

One thing I am very excited about is not so much a project as a new lab we started. We opened a lab in Seattle – a picture of the building is down there on the left, with the beautiful view out the window and the café right above it – to basically invent the future of robotics (see slide). Robots today are basically very accurate positioning machines. They tend to operate open loop. They move a spot welder or spray gun to a preprogrammed position, assuming the part you are operating on is where it is supposed to be. You’ll get very repeatable welds or painting or whatever it is you’ve programmed the robots to do, but that’s not the future. That’s the past.

The future of robotics is robots that can interact with their environment, and the thing that makes this future possible is deep learning. By building perception systems based on deep learning, we can build robots that can react to things not being where they are supposed to be; they estimate pose and plan their paths accordingly. We can have them work with humans without injuring the humans in the process. So our view is that the future of robotics is robots interacting with their environment, and we are going to invent a lot of that future.

Using the kitchen [as an example] – it’s hard to see most of it here, but this is a little Segway base with a robot arm on it, operating in a kitchen, working alongside a human to carry out tasks such as preparing a meal. If you think about it, that’s a hard thing for a robot. You are dealing with awkward shapes. You are dealing with people who move things around in odd ways. So it is really stressing the capabilities of what robots can do. A lot of people have done interesting demos trying to use reinforcement learning to train robots end-to-end. That doesn’t work in such a complex environment.

We are actually having to go back and look at a lot of classic robotics tasks like path finding, and we recently came up with a way of doing path finding using Riemannian Motion Policies (RMPs). It’s able to better deal with an unknown environment and maneuver the robot arm to avoid striking things. To do pose estimation, we use neural networks: say there’s a box of noodles you want to cook sitting on the counter, we estimate the pose of that box. We do that by estimating the pose, rendering an image of what the box would look like in that pose, comparing it to the real image, and iterating to refine the pose estimate. It’s really very accurate, and by putting those pose estimates together we’ve been able to have these robots carry out very interesting tasks in our kitchen.
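The estimate-render-compare-refine loop Dally describes can be illustrated with a toy. The sketch below recovers a 2-DoF translation “pose” of a synthetic Gaussian blob by greedy local search over rendered guesses; it is a deliberately simplified stand-in for the real pipeline (which uses neural networks, full 6-DoF poses, and a proper renderer), and every function in it is ours, not Nvidia’s.

```python
import numpy as np

def render(pose, size=64, sigma=6.0):
    """Toy 'renderer': a Gaussian blob centered at pose = (x, y)."""
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - pose[0])**2 + (ys - pose[1])**2) / (2 * sigma**2))

def refine_pose(observed, pose, iters=100):
    """Render the object at the current guess, compare to the observed
    image, and greedily nudge the pose to shrink the pixel error."""
    for _ in range(iters):
        err = np.sum((render(pose) - observed) ** 2)
        best = pose
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (pose[0] + dx, pose[1] + dy)
            cand_err = np.sum((render(cand) - observed) ** 2)
            if cand_err < err:
                err, best = cand_err, cand
        if best == pose:   # no neighboring pose improves: converged
            break
        pose = best
    return pose

observed = render((40, 25))             # ground-truth pose to recover
print(refine_pose(observed, (30, 30)))  # -> (40, 25)
```

The real system replaces the greedy search with learned updates, but the structure is the same: hypothesize a pose, render it, and let the image discrepancy drive the next refinement.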

  6. Wrap-up – Pick Good Projects; Don’t Waste Resources; Make a Productive Impact

To summarize, Nvidia research is kind of like a big sandbox to play in. We get to play with neat technology that, if successful, will have a positive impact on the company. We span from circuits at the low end – better signaling, both on that organic substrate and in photonics, which I also consider circuits – all the way up to applications: graphics, perception and learning, and robotics. Our goal is to learn as much as possible with the least amount of effort and to optimize our impact on the company. As part of doing that second thing, we involve the product people from day one in a new research project so that they both influence the project and gain ownership in it, and it has more impact on the company. We have had many successes, only a small number of them are listed here, and in my view what we really do is invent the future of Nvidia.

Dally Bio from Nvidia web site:

https://research.nvidia.com/person/william-dally

Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at California Institute of Technology (CalTech), where he designed the MOSSIM Simulation Engine and the Torus Routing chip, which pioneered “wormhole” routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, and the ACM Maurice Wilkes award. He has published over 250 papers, holds over 120 issued patents, and is an author of four textbooks. Dally received a bachelor’s degree in Electrical Engineering from Virginia Tech, a master’s in Electrical Engineering from Stanford University and a Ph.D. in Computer Science from CalTech. He was a cofounder of Velio Communications and Stream Processors.
