GTC 2019: Chief Scientist Bill Dally Provides Glimpse into Nvidia Research Engine

By John Russell

March 22, 2019

Amid the frenzy of GTC this week – Nvidia’s annual conference showcasing all things GPU (and now AI) – William Dally, chief scientist and SVP of research, provided a brief but insightful portrait of Nvidia’s research organization. It’s perhaps not gigantic by large company standards, roughly 175 full-time researchers worldwide, but still sizable and quite impactful. At GTC, the exhibit hall was packed as usual with sparkly new products in various stages of readiness. It’s good to remember that many of these products were ushered into existence or somehow enabled by earlier work from Dally’s organization.

“We have had many successes, only a small number of them are listed here (in his presentation), and in my view what we really do is invent the future of Nvidia,” said Dally during his press briefing.

William Dally, Nvidia chief scientist and head of research

Nvidia must agree and politely declined to share Dally’s slides afterward. Perhaps a little corporate wariness is warranted. No matter – a few phone pics will do. In his 20-minute presentation, Dally hardly gushed secrets but did a nice job of laying out Nvidia’s research philosophy and broad organization, and he even discussed a few of its current priorities. It’s probably not a surprise that optical interconnect is one pressing challenge being tackled and that work is in progress on “something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit.” More on that project later.

Presented here are most of Dally’s comments (lightly edited). They comprise an overview of Nvidia’s approach to thinking about and setting up the research function in a technology-driven company. Some of the material will be familiar; some may surprise you. Before jumping in, it’s worth noting that Dally is well qualified for the job. He was recruited from Stanford University in 2009, where he was chairman of the computer science department. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and received the 2010 Eckert-Mauchly Award. There’s a short bio at the end of the article.

  1. Philosophy – What is Research’s Role at Nvidia?

To give you an idea of what we do I’ll give you our philosophy. Our goal is to stay ahead of most of Nvidia and try to do things that can’t be done in the product groups but will make a difference for the product groups. When I was talked into leaving the academic world in 2008 by Jensen and starting Nvidia research in its current incarnation, I spent time surveying a lot of other industrial research labs and found that most of them do great research or have a huge impact on the company, but almost none of them did both. The ones that did great research tended to publish lots of papers but were completely disconnected from their company’s product groups. Others wound up being consultants for the product groups, doing development work but not really doing much research. So I set as a goal for Nvidia research to walk this very narrow line between these two chasms, either of which will swallow you, to try to do great research and make a difference for the company.

Our philosophy on how to do research is to maximize what we learn for the minimum amount of effort put in. So in jumping into a project we basically look at the risk/reward ratio: what would be the payoff if we succeed, and how much effort is it going to take? We try to do experiments that are cheap and [require] very little effort but, if successful, will have a huge impact.

Another thing we do is involve the product groups at the beginning of research projects rather than at the end, and this has two really valuable consequences. One is that by involving them at the beginning they get a sense of ownership in it. When we get done we’re not just popping up with something they have never seen before; it is something they have been sort of a godparent to from day one, incubating the technology along. Probably more important, though, is that by getting them involved at the beginning we wind up solving real problems, not some artificial, academically imagined problem we pose for ourselves. The technology winds up being much easier for them to adopt.

  2. Organization – Nvidia Tackles Supply and Demand

Roughly, we organize Nvidia research into two sides. There is a supply side that supplies technology to make GPUs better: we have a group that does circuits, a group that does design methodologies for VLSI, architecture for GPUs, networks, and programming systems. Programming systems really spans both sides. [The other side is] the demand side of Nvidia research, which is the part of the research lab that drives demand for GPUs. All of our work in AI – in perception and learning, applied deep learning, and the algorithms groups in our Toronto and Tel Aviv AI labs – drives demand by creating new algorithms for deep learning.

We also have three different graphics groups that drive demand in graphics. One of our goals is to continue to raise the bar of what good graphics is, because if it ever stays stationary we would get eaten from below by commodity graphics. We recently opened a robotics lab. Our goal is basically to develop the techniques that will make the robots of the future work closely with humans and be our partners, [with] Nvidia powering the brains for these robots. We have a lab that is looking into that.

The question mark down here (see slide) is our moonshot projects. Often we will pull people out of these different groups and kick off a moonshot. We had [one] a number of years ago to do a research GPU called Einstein, and Einstein morphed into Volta and wound up being a great success. Then we had one a few years after that where we wanted to make real-time ray tracing a reality. We pulled people out of the graphics groups and architecture groups and kicked off a project that we internally called the TTU, for tree traversal unit, and it became the RT cores in Turing. So we have been able to have a number of very successful integration projects across these different groups.

We are geographically diverse, with many locations in the U.S. [and] Europe. We just opened a lab in Israel. I’d very much like to start a lab in Asia, and it really requires finding the right leader to start the lab. We tend to build labs around leaders rather than picking a geography and then trying to start something there.

  3. Engaging the Community – Publishing Ensures Quality; Open Sourcing Mobilizes Community Development

We publish pretty openly. Our goal is to engage the outside community, and publishing serves a number of functions, [such as] quality control. One of the things I have observed is that research labs that don’t publish quickly wind up doing mediocre research, because the scrutiny of peer review, while harsh at times, really is a great quality control measure. If your work is not good enough to get into a top-tier conference like NeurIPS or ICLR in AI, ISCA in architecture, or SIGGRAPH in graphics, then it’s not good. And you have to be honest with yourself about that.

In addition to publishing, we file a number of patents, building intellectual property for the company. We release a lot of open source software, and this is in many ways a higher-impact thing than a publication because people immediately grab that open source software. Much of the GAN (generative adversarial network) work we’ve done, such as progressive GANs – by open sourcing it, people immediately grab it, build on it, and start doing really interesting work. [It’s] a way of having that community feed itself and a way of making progress very rapidly. A small listing of some of the more recent papers we’ve published in leading venues is here.

  4. Successes – So How’s It Working?

We’ve had a lot of really big technology transfer successes to the company and I have just listed a few of them here. Almost all of the work that Nvidia does in ray tracing started in Nvidia research. This includes our OptiX ray tracing product, which is sort of the core of our professional graphics. That started as a project when Steve Parker, who is now our general manager of professional graphics, was a research director reporting to me. After it became a successful research project we moved Steve and his whole team into our content organization and turned it into a product.

Then, as I said, we had a moonshot that developed what became the RT core in Turing, and actually a lot of the algorithmic things that underlie it. Our very fast algorithms – [the] BVH (bounding volume hierarchy) trees that are important in sampling to decide the right directions to cast rays – all started out as projects in Nvidia research. DGX-2 is based on a switch we developed, the NVSwitch. That started as a project in Nvidia research. We had a vision of building what are essentially large virtual GPUs, a bunch of GPUs all sharing their memories with very fast bandwidth between them, and we were building a prototype of this in research based on FPGAs. The product group got wind of it and basically grabbed it out of our hands before we were ready to let it go. But that’s an example of successful development and transfer.

cuDNN [was developed] back when Bryan Catanzaro (vice president of applied deep learning) was in our programming systems research group. I started a project, a joint project with Andrew Ng (Baidu), and recruited Bryan to be the Nvidia representative. The software that came out of that really launched Nvidia into deep learning. Then there are a bunch of applications in deep learning: image inpainting, noise-to-noise denoising, and progressive GAN, which really was the first GAN to crack the problem of producing good high-resolution images by not trying to learn everything at once but training the GAN progressively, starting with low-resolution 4×4 images and slowly working your way up to 1K or 4K images. Here’s a more complete list (see slide). I won’t go into it because I want to leave time for the interesting stuff later.
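For illustration only – this sketch shows the general idea of progressive growing rather than Nvidia’s exact training recipe – the resolution schedule simply doubles from 4×4 until it reaches the target resolution, with new layers faded in at each stage:

    # Hypothetical sketch: a progressive-growing resolution schedule that starts
    # at 4x4 and doubles until the target resolution is reached.
    def progressive_schedule(start=4, target=1024):
        res = start
        while res <= target:
            yield res
            res *= 2

    print(list(progressive_schedule()))  # [4, 8, 16, 32, 64, 128, 256, 512, 1024]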

  5. FutureScan – What’s on the Slab in the Lab

I picked three projects from other parts of Nvidia research to give you a flavor for the breadth of what we do. One project we just kicked off is a collaboration with Columbia University and SUNY Poly to build a photonic version of NVLink and NVSwitch. Very often we gauge what research to do by trying to find gaps – projecting our [current] technologies forward and looking at where we are going to come up short. One place we are going to come up short is off-chip bandwidth, which is constrained both by bits per second per millimeter of chip edge – how many bits you can get on and off the chip given a certain perimeter – and by energy, the picojoules per bit of getting that off the chip. So electrical signaling is pretty much on its last legs. We are going to be revving future versions of NVLink to 50 gigabits per second, 100 gigabits per second, per pin, but then we’re kind of out of gas. What can we do beyond that?

What we do is go to optics, and our plan is to produce something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit, which is about an order of magnitude better than what we do today on both of those dimensions. The way we are able to do this is by producing a comb laser source. Basically, it produces a bunch of different frequency tones, and we can modulate each tone, so we are able to put about 400 gigabits per second on a single fiber by breaking it up into 32 different wavelengths we can individually turn on and off. We can then connect up a DGX-like box with switches that have fibers coming in and out of them and GPUs that have fibers coming in and out of them, and build very large DGX systems with very high bandwidth and very low power.
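As a back-of-the-envelope check (the derived numbers below are simple arithmetic on the quoted targets, not figures Dally stated), those targets imply roughly 12.5 Gb/s per wavelength, about five fibers per millimeter of chip edge, and about 4 W of link power per millimeter of edge:

    # Rough arithmetic from the quoted targets; only the derived values are new.
    edge_bw_bps_per_mm = 2e12        # 2 Tb/s per mm of chip edge (target)
    energy_j_per_bit   = 2e-12       # 2 pJ/bit (target)
    fiber_rate_bps     = 400e9       # ~400 Gb/s per fiber
    wavelengths        = 32          # comb-laser tones per fiber

    per_wavelength_gbps = fiber_rate_bps / wavelengths / 1e9    # 12.5 Gb/s per tone
    fibers_per_mm       = edge_bw_bps_per_mm / fiber_rate_bps   # 5 fibers per mm of edge
    power_w_per_mm      = edge_bw_bps_per_mm * energy_j_per_bit # 4 W per mm of edge

    print(per_wavelength_gbps, fibers_per_mm, power_w_per_mm)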

We are also experimenting with scalable ways of doing deep learning. This also demonstrates a lot of the work we do in building research prototypes. We recently taped out and evaluated what we call RC18 – research chip 2018 – which is a deep learning accelerator that can be scaled from a very small size, [from] a single one of these PEs (processing elements), up to 16 PEs on a small die. We have integrated 36 dies on a single organic substrate, and the advantage here is that it demonstrated a lot of technologies, one of which is very efficient signaling from die to die on an organic substrate [and] one of which is a scalable deep learning architecture. This has an energy per op – let’s see if I can get the numbers right – of I believe about 100 femtojoules per op doing deep learning inference, so it’s actually quite a bit better than most other deep learning inference engines that are currently available, and it is efficient at a batch size of one. We can have very low latency inference as well. The technology for signaling between the dies is something called ground-referenced signaling, and that’s probably about the best you can do electrically before we have to jump to the optical thing I showed you previously.
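For context, the quoted ~100 femtojoules per operation converts directly into an efficiency figure, and the per-package PE count follows from the numbers above (simple arithmetic, not figures stated in the talk):

    # Energy per op -> ops per joule, which equals ops per second per watt.
    energy_per_op_j = 100e-15                     # ~100 fJ per inference op, as quoted
    tops_per_watt = 1.0 / energy_per_op_j / 1e12  # -> 10 TOPS/W

    # Scalability as described: up to 16 PEs per die, 36 dies on one substrate.
    pes_per_package = 16 * 36                     # 576 PEs in the prototype package

    print(tops_per_watt, pes_per_package)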

One thing I am very excited about is not so much a project as a new lab we started. We opened a lab in Seattle – a picture of the building is down there on the left, with the beautiful view out the window and the café right above it – to basically invent the future of robotics (see slide). Robots today are basically very accurate positioning machines. They tend to operate open loop. You know, they move a spot welder or spray gun to a preprogrammed position so that the part you are operating on is where it is supposed to be. You’ll get very repeatable welds or painting or whatever it is you’ve programmed the robots to do, but that’s not the future. That’s the past.

The future of robotics is robots that can interact with their environment, and the thing that makes this future possible is deep learning. By building perception systems based on deep learning we can build robots that can react to things not being where they are supposed to be. They estimate their pose and plan their paths accordingly. We can have them work with humans without injuring the humans in the process. So our view is that the future of robotics is robots interacting with their environment, and we are going to invent a lot of that future.

Using the kitchen [as an example] – it’s hard to see most of it here, but this is a little Segway base with a robot arm on it, operating in a kitchen, working alongside a human to carry out tasks such as preparing a meal. If you think about it, that’s a hard thing for a robot. You are dealing with awkward shapes. You are dealing with people who move things around in odd ways. So it is really stressing the capabilities of what robots can do. A lot of people have done interesting demos trying to use reinforcement learning to train robots end-to-end. That doesn’t work in such a complex environment.

We are actually having to go back and look at a lot of classic robotics tasks like path finding, and we recently came up with a way of doing path finding using Riemannian Motion Policies (RMPs). It’s better able to deal with an unknown environment and maneuver the robot arm to avoid striking things. To do pose estimation, we use neural networks to estimate the pose of, say, a box of noodles on the counter that you want to cook. We do that by estimating the pose, rendering an image of what that box would look like in that pose, comparing it to the real image, and iterating to refine the pose estimate. It’s really very accurate, and by putting those pose estimates together we’ve been able to have these robots carry out very interesting tasks in our kitchen.
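That render-and-compare loop can be sketched roughly as follows. This is a hypothetical outline, not Nvidia’s code; render, compare, and refine are stand-ins for the real renderer, image-similarity measure, and pose-update step:

    import numpy as np

    # Stand-in components (hypothetical): a renderer for the object's model,
    # an image-residual measure, and a step that nudges the 6-DoF pose estimate.
    def render(pose):                  # synthetic image of the object at `pose`
        return np.zeros((64, 64))
    def compare(observed, synthetic):  # residual between camera image and rendering
        return observed - synthetic
    def refine(pose, residual):        # move the pose to shrink the residual
        return pose - 0.1 * residual.mean() * np.ones(6)

    def estimate_pose(observed, pose, iterations=10):
        # Iterate: render the current guess, compare to the camera image, refine.
        for _ in range(iterations):
            synthetic = render(pose)
            residual = compare(observed, synthetic)
            pose = refine(pose, residual)
        return pose

    camera_image = np.zeros((64, 64))
    pose_estimate = estimate_pose(camera_image, pose=np.zeros(6))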

  6. Wrap-up – Pick Good Projects; Don’t Waste Resources; Make a Productive Impact

To summarize, Nvidia research is kind of like a big sandbox to play in. We get to play with neat technology that, if successful, will have a positive impact on the company. We span from circuits at the low end – better signaling on that organic substrate, and I consider the photonics also circuits – all the way up to applications in graphics, perception and learning, and robotics. Our goal is to learn as much as possible with the least amount of effort and to maximize our impact on the company. As part of doing that second thing, we involve the product people from day one in a new research project so that they both influence that project, giving it more impact on the company, and gain ownership in it. We have had many successes, only a small number of them are listed here, and in my view what we really do is invent the future of Nvidia.

Dally Bio from Nvidia web site:

https://research.nvidia.com/person/william-dally

Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at California Institute of Technology (CalTech), where he designed the MOSSIM Simulation Engine and the Torus Routing chip, which pioneered “wormhole” routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, and the ACM Maurice Wilkes award. He has published over 250 papers, holds over 120 issued patents, and is an author of four textbooks. Dally received a bachelor’s degree in Electrical Engineering from Virginia Tech, a master’s in Electrical Engineering from Stanford University and a Ph.D. in Computer Science from CalTech. He was a cofounder of Velio Communications and Stream Processors.
