GTC 2019: Chief Scientist Bill Dally Provides Glimpse into Nvidia Research Engine

By John Russell

March 22, 2019

Amid the frenzy of GTC this week – Nvidia’s annual conference showcasing all things GPU (and now AI) – William Dally, chief scientist and SVP of research, provided a brief but insightful portrait of Nvidia’s research organization. It’s perhaps not gigantic by large company standards, roughly 175 full-time researchers worldwide, but still sizable and quite impactful. At GTC, the exhibit hall was packed as usual with sparkly new products in various stages of readiness. It’s good to remember that many of these products were ushered into existence or somehow enabled by earlier work from Dally’s organization.

“We have had many successes, only a small number of them are listed here (in his presentation), and in my view what we really do is invent the future of Nvidia,” said Dally during his press briefing.

William Dally, Nvidia chief scientist and head of research

Nvidia must agree and politely declined to share Dally’s slides afterward. Perhaps a little corporate wariness is warranted. No matter – a few phone pics will do. In his 20-minute presentation, Dally hardly gushed secrets but did a nice job of laying out Nvidia’s research philosophy and broad organization, and he even discussed a few of its current priorities. It’s probably not a surprise that optical interconnect is one pressing challenge being tackled and that work is in progress on “something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit.” More on that project later.

Presented here are most of Dally’s comments (lightly edited). They comprise an overview of Nvidia’s approach to thinking about and setting up the research function in a technology-driven company. Some of the material will be familiar; some may surprise you. Before jumping in, it’s worth noting that Dally is well-qualified for the job. He was recruited from Stanford University in 2009, where he was chairman of the computer science department. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and received the 2010 Eckert-Mauchly Award. There’s a short bio at the end of the article.

  1. Philosophy – What is Research’s Role at Nvidia?

To give you an idea of what we do I’ll give you our philosophy. Our goal is to stay ahead of most of Nvidia and try to do things that can’t be done in the product groups but will make a difference for the product groups. When I was talked into leaving the academic world in 2008 by Jensen and starting Nvidia research in its current incarnation, I spent time surveying a lot of other industrial research labs and found that most of them do great research or have huge impact on the company, but almost none of them did both. The ones that did great research tended to publish lots of papers but were completely disconnected from their company’s product groups. Others wound up sort of being consultants for the product groups and doing development work but not really doing much research. So I set as a goal for Nvidia research to walk this very narrow line between these two chasms, either of which will swallow you, to try to do great research and make a difference for the company.

Our philosophy on how to do research is to maximize what we learn for the minimum amount of effort put in. So in jumping into a project we basically look at the risk-reward ratio: what would be the payout if we succeed, and how much effort is it going to take? We try to do experiments that are cheap, [require] very little effort, but if successful will have a huge impact.

Another thing we do is involve the product groups at the beginning of research projects rather than at the end, and this has two really valuable consequences. One is that by involving them at the beginning they get a sense of ownership in it. When we get done we’re not just popping up with something that they have never seen before; it is something that they have been sort of a godparent to from day one, incubating the technology along. Probably more important, though, is that by getting them involved in the beginning we wind up solving their real problems, not some artificial, academically imagined problem that we pose for ourselves. The technology winds up being much easier for them to adopt.

  2. Organization – Nvidia Tackles Supply and Demand

Roughly we organize Nvidia research into two sides. There is a supply side that supplies technology to make GPUs better: we have a group that does circuits, a group that does design methodologies for VLSI, architecture for GPUs, networks, and programming systems. Programming systems really sort of spans both sides. [The other side is] the demand side of Nvidia research, the part of the research lab that drives demand for GPUs. All of our work in AI – in perception and learning, applied deep learning, and the algorithms groups in our Toronto and Tel Aviv AI labs – drives demand by creating new algorithms for deep learning.

We also have three different graphics groups that drive demand in graphics. One of our goals is to continue to raise the bar of what good graphics is, because if it ever stays stationary we would get eaten from below by commodity graphics. We recently opened a robotics lab. Our goal is to basically develop the techniques that will make robots of the future work closely with humans and be our partners, [with] Nvidia powering the brains of these robots. We have a lab that is looking into that.

The question mark down here (see slide) is our moonshot projects. Often we will basically pull people out of these different groups and kick off a moonshot. We had [one] a number of years ago to do a research GPU called Einstein, and Einstein morphed into Volta, which wound up being a great success. Then we had one a few years after that where we wanted to make real time ray tracing a reality. We pulled people out of the graphics groups and architecture groups and kicked off a project that we basically called internally the TTU, for tree traversal unit, and it became the RT cores in Turing. So we have been able to have a number of very successful integration projects across these different groups.

We are geographically diverse with many locations in the U.S. [and] Europe. We just opened a lab in Israel. I’d very much like to start a lab in Asia, and it really requires finding the right leader to start the lab. We tend to build labs around leaders rather than picking a geography and then trying to start something there.

  3. Engaging the Community – Publishing Ensures Quality; Open Sourcing Mobilizes Community Development

We publish pretty openly. Our goal is to engage the outside community, and publishing serves a number of functions, [such as] quality control. One of the things I have observed is that research labs that don’t publish quickly wind up doing mediocre research, because the scrutiny of peer review, while harsh at times, really is a great quality control measure. If your work is not good enough to get into a top tier conference like NeurIPS or ICLR in AI, ISCA in architecture, or SIGGRAPH in graphics, then it’s not good. And you have to be honest with yourself about that.

In addition to publishing we file a number of patents, building intellectual property for the company. We release a lot of open source software, and this is in many ways a higher impact thing than a publication because people immediately grab it. Much of the GAN (generative adversarial network) work we’ve done, such as progressive GANs: by open sourcing it, people immediately grab it, build on it, and start doing really interesting work. [It’s] a way of having that community feed itself and a way of making progress very rapidly. A small listing of some of the more recent papers we’ve published in leading venues is here (see slide).

  4. Successes – So How’s It Working?

We’ve had a lot of really big technology transfer successes to the company and I have just listed a few of them here. Almost all of the work that Nvidia does in ray tracing started in Nvidia research. This includes our OptiX ray tracing product, which is sort of the core of our professional graphics. That started as a project when Steve Parker, who is now our general manager of professional graphics, was a research director reporting to me. After it became a successful research project we basically moved Steve and his whole team into our content organization and turned it into a product.

Then, as I said, we had a moonshot that developed what became the RT core in Turing, and actually a lot of the algorithmic things that underlie it. Our very fast algorithms [for] BVH trees, which are important in sampling to decide what the right directions are to cast rays, all started out as projects in Nvidia research. DGX-2 is based on a switch we developed, the NVSwitch. That started as a project in Nvidia research. We had a vision of building what are essentially large virtual GPUs – a bunch of GPUs all sharing their memories with very fast bandwidth between them – and we were building a prototype of this in research based on FPGAs. The product group got wind of it and basically grabbed it out of our hands before we were ready to let it go. But that’s an example of successful development and transfer.

cuDNN [was developed] back when Bryan Catanzaro (vice president of applied deep learning) was in our programming systems research group. I started a project, a joint project with Andrew Ng (Baidu), and recruited Bryan to be the Nvidia representative. The software that came out of that really sort of launched Nvidia into deep learning. Then there were a bunch of applications in deep learning: image inpainting, noise-to-noise denoising, and progressive GAN, which really was the first GAN to crack the problem of producing good high resolution images by not trying to learn everything at once but training the GAN progressively, starting with low resolution 4×4 images and slowly working your way up to 1K or 4K images. Here’s a more complete list (see slide). I won’t go into it because I want to leave time for the interesting stuff later.
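To make the progressive training idea concrete, here is a minimal sketch in Python of the resolution schedule Dally describes. It assumes resolutions simply double each phase from 4×4 up to 1024×1024; the `train_at_resolution` callable is a hypothetical placeholder, not part of Nvidia’s released progressive GAN code.

```python
# Illustrative sketch of a progressive-growing training schedule
# (doubling resolution each phase, 4x4 up to 1024x1024), as Dally
# describes. Model construction and training details are omitted;
# `train_at_resolution` is a hypothetical placeholder, not Nvidia code.

def progressive_resolutions(start=4, final=1024):
    """Yield the image resolutions used in successive training phases."""
    res = start
    while res <= final:
        yield res
        res *= 2

def train_progressively(train_at_resolution, final_res=1024):
    for res in progressive_resolutions(final=final_res):
        # Each phase grows the GAN to the next resolution and fine-tunes,
        # so the network learns coarse structure before fine detail.
        train_at_resolution(res)

# Phases: 4, 8, 16, 32, 64, 128, 256, 512, 1024
print(list(progressive_resolutions()))
```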

  5. FutureScan – What’s on the Slab in the Lab

I picked three projects from other parts of Nvidia research to give you a flavor for the breadth of what we do. One project we just kicked off is a collaboration with Columbia University and SUNY Poly to build a photonic version of NVLink and NVSwitch. Very often we gauge what research we do by trying to find gaps – projecting our [current] technologies forward and looking at where we are going to come up short. One place we are going to come up short is in off-chip bandwidth, which is constrained both by bits per second per millimeter of chip edge – how many bits you can get on and off the chip given a certain perimeter – and by energy, picojoules per bit, getting that off the chip. So electrical signaling is pretty much on its last legs. We are going to be revving future versions of NVLink to 50 gigabits per second, then 100 gigabits per second, per pin, but then we’re kind of out of gas. What can we do beyond that?

What we do is go to optics, and our plan is to produce something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit, which is about an order of magnitude better than what we do today on both of those dimensions. The way we are able to do this is by producing a comb laser source – basically a bunch of different frequency tones. We can then modulate each tone, so we are able to put about 400 gigabits per second on a single fiber by breaking it up into 32 different wavelengths we can individually turn on and off. We can then connect up a DGX-like box with switches that have fibers coming in and out of them and GPUs that have fibers coming in and out of them and build very large DGX systems with very high bandwidth and very low power.
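As a rough sanity check on those numbers, the short calculation below works out what the quoted figures imply: the per-wavelength data rate, how many fibers per millimeter of chip edge are needed to reach 2 Tb/s/mm, and the signaling power of one fully loaded fiber at 2 pJ/bit. The derived values are our back-of-the-envelope estimates, not Nvidia specifications.

```python
# Back-of-the-envelope check of the optical NVLink numbers Dally quotes.
# Stated: ~400 Gb/s per fiber across 32 wavelengths, a target of
# 2 Tb/s per mm of chip edge at 2 pJ/bit. The derived figures below
# (per-wavelength rate, fibers per mm, link power) are our arithmetic,
# not Nvidia specifications.

fiber_gbps = 400                  # aggregate bandwidth per fiber (Gb/s)
wavelengths = 32                  # comb-laser tones per fiber
edge_target_gbps_per_mm = 2000    # 2 Tb/s per mm of chip edge
energy_pj_per_bit = 2             # target signaling energy

per_wavelength_gbps = fiber_gbps / wavelengths         # 12.5 Gb/s per tone
fibers_per_mm = edge_target_gbps_per_mm / fiber_gbps   # 5 fibers per mm of edge

# Power for one fiber running flat out: bits/s * joules/bit
fiber_power_w = fiber_gbps * 1e9 * energy_pj_per_bit * 1e-12   # 0.8 W

print(f"{per_wavelength_gbps} Gb/s per wavelength")
print(f"{fibers_per_mm:.0f} fibers per mm of chip edge")
print(f"{fiber_power_w:.2f} W per fully loaded fiber")
```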

We are also experimenting with scalable ways of doing deep learning. This also demonstrates a lot of work we do in building research prototypes. We recently taped out and evaluated what we call RC 18 – research chip 2018 – which is a deep learning accelerator that can be scaled from a very small size, [from] a single one of these PEs (processing elements) up to 16 PEs on a small die. We have integrated 36 dies on a single organic substrate, and the advantage here is it demonstrated a lot of technologies, one of which is very efficient signaling from die to die on an organic substrate, [and] one of which is a scalable deep learning architecture. This has an energy per op – let’s see if I can get the numbers right – I believe of about 100 femtojoules per op doing deep learning inference, so it’s actually quite a bit better than most other deep learning inference engines that are currently available, and it is efficient at a batch size of one. We can have very low latency inference as well. The technology for signaling between the dies is something called ground-referenced signaling, and that’s probably about the best you can do electrically before we have to jump to the optical thing I showed you previously.
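A quick worked example helps put the quoted figures in perspective: 16 PEs per die across 36 dies gives 576 PEs on the assembly, and at roughly 100 fJ/op, sustaining 10 trillion operations per second would cost about one watt of compute energy. The throughput used below is a hypothetical illustration, not a published RC 18 specification, and the calculation ignores memory and signaling energy.

```python
# Quick arithmetic on the research chip figures Dally mentions: 16 PEs per
# die, 36 dies on one organic substrate, and roughly 100 fJ per operation
# for deep learning inference. The throughput below is a hypothetical
# illustration, not a published spec.

pes_per_die = 16
dies = 36
energy_per_op_j = 100e-15          # ~100 femtojoules per op (as quoted)

total_pes = pes_per_die * dies     # 576 PEs on the 36-die assembly

# At 100 fJ/op, sustaining 10 trillion ops/s costs about 1 watt of
# compute energy (memory and signaling not included).
hypothetical_ops_per_s = 10e12
compute_power_w = energy_per_op_j * hypothetical_ops_per_s

print(f"{total_pes} PEs total")
print(f"{compute_power_w:.1f} W at {hypothetical_ops_per_s:.0e} ops/s")
```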

One thing I am very excited about is not so much a project as a new lab we started. We opened a lab in Seattle – a picture of the building is down there on the left, with the beautiful view out the window and the café right above it – to basically invent the future of robotics (see slide). Robots today are basically very accurate positioning machines. They tend to operate open loop. You know, they move a spot welder or spray gun to a preprogrammed position so that the part you are operating on is where it is supposed to be. You’ll get very repeatable welds or painting or whatever it is you’ve programmed the robots to do, but that’s not the future. That’s the past.

The future of robotics is robots that can interact with their environment, and the thing that makes this future possible is deep learning. By building perception systems based on deep learning we can build robots that can react to things not being where they are supposed to be. They estimate their pose and plan their paths accordingly. We can have them work with humans while avoiding injuring the humans in the process. So our view is that the future of robotics is robots interacting with their environment, and we are going to invent a lot of that future.

Using the kitchen [as an example] – it’s hard to see most of it here, but this is a little Segway base with a robot arm on it, operating in a kitchen and working alongside a human to carry out tasks such as preparing a meal. If you think about it, that’s a hard thing for a robot. You are dealing with awkward shapes. You are dealing with people who move things around in odd ways. So it is really stressing the capabilities of what robots can do. A lot of people have done interesting demos trying to use reinforcement learning to train robots end-to-end. That doesn’t work in such a complex environment.

We are actually having to go back and look at a lot of classic robotics tasks like path finding, and we recently came up with a way of doing path finding using Riemannian Motion Policies (RMPs). It’s able to better deal with an unknown environment and maneuver the robot arm to avoid striking things. To do pose estimation, we use neural networks – say there’s a box of noodles on the counter that you want to cook – to estimate the pose of that box. We do that by estimating the pose, rendering an image of what that box would look like in that pose, comparing it to the real image, and iterating to refine the pose estimate. It’s really very accurate, and by putting those pose estimates together we’ve been able to have these robots carry out very interesting tasks in our kitchen.
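The render-and-compare approach Dally describes can be sketched as a simple iterative refinement loop. The sketch below is only illustrative: the `render`, `compare`, and `update` functions are hypothetical placeholders standing in for a renderer, an image-space error metric, and a pose update rule; it is not Nvidia’s actual implementation.

```python
# Sketch of the render-and-compare pose refinement loop Dally describes:
# estimate an object's pose, render what it would look like at that pose,
# compare against the real camera image, and iterate. The renderer,
# comparison metric, and update rule here are placeholders, not Nvidia's
# actual implementation.

def refine_pose(initial_pose, real_image, render, compare, update, steps=10):
    """Iteratively refine a 6-DoF pose estimate.

    render(pose)         -> synthetic image of the object at `pose`
    compare(real, synth) -> scalar error between the two images
    update(pose, error)  -> improved pose estimate
    """
    pose = initial_pose
    for _ in range(steps):
        synthetic = render(pose)               # what the box would look like here
        error = compare(real_image, synthetic)
        pose = update(pose, error)             # nudge pose to reduce the mismatch
    return pose
```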

  6. Wrap-up – Pick Good Projects; Don’t Waste Resources; Make a Productive Impact

To sort of summarize, Nvidia research is kind of like a big sandbox to play in. We get to play with neat technology that, if successful, will have a positive impact on the company. We span from circuits at the low end – better signaling, both on that organic substrate and, I consider the photonics also circuits – all the way up to applications in graphics, perception and learning, and robotics. Our goal is to learn as much as we can with the least amount of effort and to optimize our impact on the company, and as part of that second thing we involve the product people from day one in a new research project so that they influence the project, it has more impact on the company, and they gather ownership in it. We have had many successes, only a small number of them are listed here, and in my view what we really do is invent the future of Nvidia.

Dally Bio from Nvidia web site:

https://research.nvidia.com/person/william-dally

Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at California Institute of Technology (CalTech), where he designed the MOSSIM Simulation Engine and the Torus Routing chip, which pioneered “wormhole” routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, and the ACM Maurice Wilkes award. He has published over 250 papers, holds over 120 issued patents, and is an author of four textbooks. Dally received a bachelor’s degree in Electrical Engineering from Virginia Tech, a master’s in Electrical Engineering from Stanford University and a Ph.D. in Computer Science from CalTech. He was a cofounder of Velio Communications and Stream Processors.
